[Binary tar (ustar) archive; contents are not representable as text. Recoverable member listing from the tar headers:
- var/home/core/zuul-output/ (directory, mode 0755, owner core:core)
- var/home/core/zuul-output/logs/ (directory, mode 0755, owner core:core)
- var/home/core/zuul-output/logs/kubelet.log.gz (regular file, mode 0644, owner core:core; gzip-compressed kubelet log)
The gzip payload of kubelet.log.gz is binary and cannot be recovered from this dump.]
`ptWӇ"ZwOHP]iMyOw{ebٳ{;::ƆhU9- aa^B^4tSae9 N\ZPw`Բh,0w?j6->CBoS+8-Q/-xPĘ N~năuڋMi7 El^}*уq\CqX{x^}SHaUX#+*F*AAR\|8T~O*l8k_ ($׏Q$w3 ɵ(Nz*q V]ISzaip#$r!z NgT1HTE˵ c'f\8P>2).,DDD)X*Q*?πJ Qxd~(de#", _t!#pe KF(T&ë Ep&syakI8)X`1rHOh8NذE-Q>;#2D#@ n'4Ez\GPG{RmF[hRÙ^ҹʺ圳#xw봴1Jק%1*Zh]BZl(o=*k_Tw_TwiQn_cwaê;[K`_Xca;XZ![|, T/m| } &"sEcF ͩ!RdfoR-0~MS;aw#dV[5[\E}v["oLEߕQƃ YtB 0'〣C!՞V$G EmZGEdQ0A[R.k%OFrOY $t>#w=7OE&Z񾽟aPz]l-v17.-͛ļ'R%q&杂ZTV\]'km5gC !I7RA>yq5}1:]˙o,ﭒaqmY'˕!5<UxpB0kA'IDk1h!-w6Fw iB *;M=ɼIM7Z[CR"jxTq87Pҭ[)r-{ĭEa%;Lz [wsktf!FxxUd!,%YT_fg?;8o5~K)1MH{H=}Rik^(֢@( HNGm7 ,3A~64cQ{Z/Sp1H͢Ǵ }gGW#7_>^o%5ˀ2vhd.50&AZBI TH'K`##a:0- 9ڐs^#km$7Eo9yh"`6M A^x #K^˞Y [Բl-ycKY͏UXe1DrFlVGԆ\kBTL9y@؀cƼBX&ـX:!RDl_|JWp1H6 9 XBB]^#+mp{CŞ-i9’UQNAQ4153l}CTU1j,$K>P^yLQtق5BRj ]ZcmKAUzݹ|m=o!wS˒ӶNjZxcWBnm>E9 m͏X r[7ܺںY=Nm1mmܲ+nwzj|t;|P )uz~9 x.;l,t8gS*r psEۘ`."TS^a(ieMPH@NQ@MU@n?s혮OyXpy8l 9seY5Aq0qg.TLSzWՇک>Rc_U4BmX*vo,CԆUV[.a{\ŶrM¦Rrl1s',R}\'fCYJW|y :sEZۮ"D;nN!;x;6ie|1MU=N~W]^WIƱa֓<]Wo`[ԷvKtW Fᒕ˕+lU֦8H`u106z#}^1G9SThBJ6")&!@q6$s^1![m*8>vByqt.ӟHPIl{3]h}C9cuD!y ah$,?Q02_q/vR!r 0d|0 Lt"e' 8gS*G8#ڊ~7goSSwgEE!~^:Wg!?=_Z&&u|?ǝ.|_e5nO?N4GKVm5 Hi)-.Ǔin5"?+J ::Gj-u< y>ף/Gy{9Ϫ=r-f0 tC\7}6Z:/u}(z?寣."m}k_ {w|qzv|F!B'd=|wwjCsiJ!OUtK;Ͽ]8-?\Ѩzm эkZ-W#ٺx//N/N6cQkaNi?,Wh4MqY<7w߁]# NQx936]ֻXl:1ǧ-tQW5j׵Zw5n,Ffv}9ϣYeXltN-ω-:Oo섧x9C~ /?qw"GV G"'' }_Cxֺϫ^]Ƶ}^rǸwㄹ5fqr[i?Mclw~ӕdx\ W]iVa構LhUoBVx%ĩ_F|tv0 Om]>NCvl^/G7BBn՚Y)#)i;jqED~Ob> ]0~N"EnwN^Q'?fgamбNuQښZh#e.,ߣHElHaQXGMPAҞܴP㐼#P^S5GFN u3? lR%:AQ0J6zـ:د#c7k߃S>r3EX pO*y=mOVRǷ_hV|L[Js 0 e`Z!d|Tس;+JTh/Z!Abd1E`3)Ofq+OʒBIPr,P2Ȣ)#(- t,%yRoւtHL[O*^pr{GXɾx崿N2tgU:trўؑ™gcG&I^Cg M㛏#)PML ߐ!ej!_ElObIݑ[wKX l{א">-Ѭ"5 49)f0L ôa ϶O0OLcЬ&VĦQ)& 9'+>L$J$-ՊN['0/)()E$b1J/9M(H^:7ۜF wS x5%3Pc3X>xPSAGd@DEZZcTr<  ŀΒ'kaCZZ]!I1! 
~d"6)kk *@)D/J%+]l֊b:gЫS@"mcJ`LJT QuF+'Ѵ*dg@oOxM<VGgߝ~L[j|Gs^jwLk;a:}j&ZKY  _Ԁ&3rJ^D*N8mIR xɜ鈫fX ^* {#~;7{o,z2#㳋:{M`ö:`25'}d@ z(tAPCT# $(yU/eK[5bVPBZq+JU}"XDH%o!ikR &Qsmm ˠMڇD1jM_@#˿lZ`,N0 6FWI Uyͳ)R-կ]PJHH!u+FY1ZB!홌X  RͮiV2{N%5qF{v"`H]݁(90n'zuDzH1ΘqkrʑAH l=ƦQ˺򊱬{*(Uz(q1r%RSM`] CT/IO%Z}|CtT{34!]hqI9Mj碅!A`ץ3F`.0*Ag.`Bb89<*HО6F$V$9l!HEm(e rfU  c[| h#jmd^_ͥs_~t)~寍mC9APmR֥Svд@CkZU:T9 vaE!2W\Je) ,HQ)T*i moLEQƃ YrQ9 QU VhJS8t@Is<9^hnyS*Z5͗GbHST]jT:7yZ}%6}Cl!Lʧ_bMS?/4;z玫9|]<{r:_ x$ )``j3ւLb2b=N-Q55`viٗ[UpD4UN)|ov.GenuZO{ZN QyPT8iP"1RW2hkCR"jxT~8!,Ѝ$S8[)r5[ĵDmՒ'jCxVW;%Fi"BB_qS{V+)բ[_prX*,m) jWyRʕ7D=4VGkQkJL'eQ[M*FfdT3Ikg3gQKMqT1(m8JT@m!GyRѫ7< 7v BdOP fh/#X/HHN 9cm4|-uPXBg Lh@L> 'H @>rXԒ:(˯i: <: DH[XA` &s1K꠱ 'zCi 5vLJh ;QLV0<,jUu.Q\όbl*hINN"$ HSX"dC9sV,*ap:&ni5Z[aFhw;BJD))%SbfRɘc06FImv@S[STz]崘3i0u)>|Z]w g"~4\ŤUwF9.֟F맇 6L:ތh,&98Pvy6xeͅ-fRg&Ow=6'1 ˭Od+p;RߦXcSK]~H&@"4m_kmbK 2{aT|ֱfτ@g/yonOAX*(1",2""&ZH0<)c"ҵo,E_fM wA9`p2nZ޻o]Js]"mW>+^cc]#n9KE*L G3ma+Z?C|55:aJO>2̪ɳatQ@&[20 yTOBp3֭>|W*cA$Acf)DxBˡXرx$͚((xCf~T|chƓߔO~-M^09lA+ G`|o٧_~#`ڰ,` p6fQ j}J?>cL4?1tab˕GeI\q c\^ԑPc`3A0Xv![nu O%I>ȨcG6sk%b퓒Zm߭uIek`k.PHƘ8P=h:FY0V0j0r-" .lL*'4{\?߅l&^hBؐIiSb'¸?[o_ea ].Rus>u* ۗ,rvO`xHoj˱Z&Bfcx׷WI-3=|0E9wU6X2=eJșxE_rj9q Kx1`jd;2{w2JvC%-/Grم*lZ@e,rqGO 'I iJ$^"K$pT TdE؄ A2 ǨU1kQRk^jF|R8[]I ZaQULoWs+TL+eaF3;rޛR@yTXq䃔A=rA".:E&amM<ekjn9PIs=K ur2mlHV+ŋ̰400K1(1 B' Ll^; vqoC'/n F< B V`:$gVqASUU]Wv]1g:X^Ц<-X[eC\S9'hm(/uYuKZq>Ѵ*m<'07 wKe棚į'ۣ8>:'/e^k@_~ݷ<[$Ǻ?6KE(uU4,ZZARvYEߤ\eה{I|P8MINҋ,ŏ(vn8ϚlUzFi f1;wFq.2`V9M-Xnxd~n@VSuk/Nm"Y/>&n *_תLK~rq>NsXe>cZq ~vtzWFߥ˹N7:Sf[XvFCf)S=XOKJkm=:OFР Q2,k 䘴G .dCfH }+d] Y]Z՝x2.-΋?`7.NkIsX&EdoFj ޱѢDYildthI4Hng{MԭIyYPѢbJRK١ީPj(e*N)* G!(^fUO6XlRLjEY؜:d)Ygpn(gj6|fE;:E8 AzcE(9FI%-=K-O@>c ]e/}7i'3r'u~Љg'U )t0z᠞Z IFe\EV ±ҏYDx l,j kjW}3AGGq|ϴNٸB7FHtI=Ƴa[p8u(O y6Ϡ#+nj6jCY5桭`V}Xג DNY@WE(& zJ1|$R\!Ѕ&REZ `/tQV:e" 풆t`ĵU͌Gt6{f}pɲWN ZNs$8>[R}@Z+um?8=N vqa糽f}O|;?{uZ)|vzq3OSW`6k_l֙EYkLpNbT OA0)FQIVJDem(=@ Hk:΃ϒNEo+!C(6-ADuDugp,WY$w<%x:=)Vnů56Ev]1,mg!y۞,U=Zp-e!ђ]_(G栞S#r`]U7qNW]}-t%ͺ^" ֡6] Bnn Vh 
BWDvkQف]Ja{CW.ؾUE+UE@W zDW A_JNW#])0++\RFox-rn|M쏞l- glFB+3ߌ'G!w;а"%`>'"!x}z;I+'ö0pPEtﳴ?1GI^t牝o?$qo([̚,շK6e@kq n)+ݗWUg~~KYp-+ `}_<ۓAqE4RАwq> "z֬, ={/=N :% ؐqFQEC$Ѣr=R+`gz 3\)zcW*\QJ5Pf3'˽Vp5*^ٜ*J]=BR ^k#Xqa:=^/.jnҬcyVs>֚9[^2NZeA Hc ]hS9Q&M(>W? j\__I-@:Wh= Xg'5-ɝwܩ(ҕ!p u רUE`֝Ł]Y"RGtŀN qܩh/UD]=BrdU=Zc Vhu(ܰPɭ6]ۥۡR-Q[j[ tu׮%AFU/tUђu(*\ݽ( ܗ>jjxk8nSoEzD X MW6oVtEihHNuM2$R3oƪ@T")׼I7//Vȉ!p;Y<*dP *"a0ޝ`f[8n\_fۊV]m+JplKZק%gl ]U*Zkw8#+ŭΥM>uP}~,x2yHOU.ڨМЄ"TDq:&BJ.g{^S@LYQ0[[SZikc3J\VR G,'84R=!5>6ܩ ez(EBlӍrlӋj`MX[回%qVB1d$2MNwϽ,/gޏOFa?W~[͓1'$ЀY2u4[]PsqY dpY"C6[ %VʒLb:9>ذ6κL6m/δזR{i]Θ<3gy.v/yeԪ'!u$脟+vYAHuh- sy:}2}?Y9Hs"O^]G^Js>=jFuu{7O:Ox9ψs6V ɚW&6WvHUsM6Kn6ӕ:״҄5!,Cn8uRE2x4UU Y lFzE oݐtHt:$:D):\]I&{BfQk!(,J$VJ:[4MT PJt>'Q  .T4("pgJ8BiÀ8q0I63/uJ)RaS$%EJnbwQU.uH!$1|L"h➆~G$W:Ve}RTF?)NJCCALA9Q/(V<P+"'*X_cK0?*Mw{'yS2`8v˫XL )H`Bʠ,\/6usu|}<2_V\`}^.$C0!B*Caj3ւb2b=6MVHKģMv8kgqY/q.ǓsSxpA;qD(f<Ka(*!,3FUJ_F<>k.i4bB <*8 7PNbXXHAllp>IN]5"w*#,|r/V%7F2#79Mo&.UL<9| Tgͽ =%jRwo߼PFߵJ.fZV@0T\-&x#32x5sf"(ܚ195zr]3 )ʬ U g]jꛩ*~8]my[٨AaFѰ%7hd@ar͌'ZK(I yI<1t$LEt!"'Ul)^&LC"88f \&Ga9AQNJ! =rA09$A^^iIBb%#1 )DX{\LFbҚ2֑%^)i/Z "}#4 UtQթZoz@{RyS# *z[%2J:T0'}a߸U(u}K_/9! 
5B-CV"JI(3U J1JfSVq`rU dw`=/8L1n]CwD%Oa+LBN~R .&׹$?.뻇:YFvlVBZ]7zkvZdnw5|oB'M:&\NE]ܾS'bk6]s[hSoouq+ϭ ,B\f ͧg%q;yOQpB کQSJ=uyt.8Fh* iܨMϓBdkj+/=>5} ?[oe _ґW7ɇ[B rH# be}0z)#"b1h#2&"m-zɬ}4 J7^vaէ)@ӆi+4{mW6})ߛǗ=FYersr%T8 yӶLDX,-,xGI4 GrM2, |;ғM 8]l(VVn(E5S^완|,7o6lO2txӈj43IMZ[r1A[ոmQ39 +暪f20ǑݲѼ*-DDtңfZ]{oP^*7S$Cl+޲#T$@1JTJ241qi'ӡi㰡KxLv .7fWDz5[6x6H ^Wuq^*IԾ$azzϢ>dCQƂ % ap,F*X+ekg!:b]$ZȮ/pWukji 6fģ*[p5NGTQ*HbDc/2% _ގ]eh !/\n!FX#qJxHprF2AW9 :iBTָO)x2.a6O57&ΌR`ΐ E"6!zy;Ǐ 3&?AJƋ NAz8u"`ɑL2mN೨H)D0+tr.SҎnO(˓0Es_  N[lBΞ!B9mΪ,qPsfs;nWR"U굽kj۷˫K "i۪݉ =Rm;q6li=#Y~|n|{=h._f+91pGznۅ `<t=|rGa!5R\9GtU0ևO0iGDwc/55j׳ w7j 40dReL S|:t z~Mx_ugq}{Wut>|~~??a߂ ,80 '$Iu'ہCOZkm149m9crfnnr+O_K5mӝxWRc?Gla9hO(U.}B=%bʹU*U]1S&1[$}4!C2^KX8(Ӥ ۬c D߭*pϺEf`ex:>z+E3P?5ŏ߀v*ECQq*Ā) _K#akH E]=wdE%{Lёk E;F60"7 O N&ZbFTQ~e'~!I?5 u6ZNV+0`_kuZ+Dkڗ Wb+ XhT V0bs)F>Nl燬*O 2 Qr\k#,іDЌ#⤀1T)b+ ( lp6HF7g)Oc.*5XJ1V8)9[lz #fd3o{Ͽ9BY7/UwlFأ#A5&{CG`&(9f>]j+h8Dy .҅Vt2&2+)o3Kd\[lB7A0ɵ r B($0f4j|sW+T?zuW^ S* 3/_~]v=򕖷aM`0̠]&PhNIt|v9w#h)>ǥhz7fO][0T7Iϰ-sBcK$mAf_ܻy✥g5lxʹ-"-,|%wP5,MYm&ޓ5ԧ7n!}MݛzMajp)?^G!\)\N, yPg4F8NԲ.jԶhd7*z= 32(PF+@aVrV٩{Λ5{MBB / F'!2͝_*оt\- 0xzz @z OtNtr3LՏ 7>KŤ؛g| ay辮-i -8Ltp1^rﵗ;k `BK 1i3wL_caq_bui'?VI:a"$zXJJpp^d=mPV# @ai9I1ogrW"gsFҡT-Z2ҹU-PamË٪eh+SٱIb2Z¨7asQ0J `hpZ8 # P M 1 2/WNK SClPFIK+u0(x'!y ScIJ߷g8I|=h1ӬTWfD5D5yQkL-I%[yf%ʛ [,R"!2畓ĨG Em@GEdQ0A[R.k%!RHDcێQkwk!myqV.gJ)7Ⲓ o*i:5Nd<>>55{X:پ4NJK',;4~T #3NUb]ӳt6 }$XhgL5VF cs~8Kzu˖Jw h0.DOBǷLSL ^P%z7`oJ2*gb khp R?Ldi^nǏ{uβOEbƆe%N„.k:fé5Y>mc;wT"mbnv?sM %8϶ d5`[=z۷=^X@?xjv3ۡæ&oD%q&-R䐚v1bx*`7pˣNCr6a͕7EDǗ'U3;z Ee6(fgbet˴ n7r!+que3`X04yT}_>\*V,ElFdNo8`M7eiSglc`ۍvœ4zO eIpZZ:u&i2">eP'y. 4,,9̀s-S揪*9 Jovaˠ{zY .o ZrV8Afxdk},V<sG)o2ʜ{nrwS?|L J-Ѣ"u~1;˜,̱xI/^ذM&rÚ3n] \(΍F6cU"kV"-e*o/Um. 
BȌ2SSm3[(-v;Z'(.j-Sֆfh#i!FFҌQ14[6(z-HGRgLSgߪXh"i=)48},qoa![۳}S۞M@d{P'!ܑlOŶgY@73C ѩT=z!VB(վly/^=B`՚d i̧ _ԷW0__֤rT{sbbR^DY"΂yZh]Qg79&&M]Ryt̝3oD=܁Z܁m '#7,oKطᔒqƛf+Żky-ri<'KۛT<) 崋eWϚoڌ .GȱR E헭Pj!QSJX oH`>Q2(Ftg53෪AU+Qqxkؔ!Mwxus]R8n9visƼ%;f^JJ]xMd|!^DnA|d5]^0(e/ԛ&*e q_{IIk7@롷2) [{&%aI!3)TCZ:SA(ٓse:SQFx ZU̯L9_Q*_EyI6Ij~mƾO7 m1vk&HLuEF"@қHRv"1yIR~"Q>_cx^3Iw%MT8$!w8$,V):B3!h ,1u%h_U쮮LTWR+W h-]]%,?'4B\퓺uХ{0]{V#x)0DѤZ0 O:WN毞?ṭ ]ʷfzZ9}> {%7W{lo`QSKв1NfMbx4J$oH\u++9haՖz1+Qk6 bdơԏjr hAGatN? $NG-VdcgCW˜ܓ_wrr|873*KSh'D_c&X(1T娦V|=:lB atOvla7ףӯ76Ěs:?V_:83~J]|t8ɋ8ȥڽ|0ney OgͧG[Z2z'Es& д?iBm庝/nlW2+\i"Xa^`Eɇ>e8 b%wH0h=6^ |f{$-Dx"ciRhP. jG壋*7wukJ9R!6n@ ~5_>$X4ƥ\f1aYjHOO}g`Lҏ'y״`SΒ(8yJiˆ^FcD2Y+냉KM-a$EV 1i& ׂLDνolz+k .ո[ fВtA5`ƽt6n[>;b/^C&*iѿ llM LC 7Rnd =!"}X{)) JnHb΂wD#I+U`_'XఆOgIʶr}C8"E6EJ'~3j#9&9VAsڜIop QؿL[JfR6nN>7 l?}fw+4UV1Ku zuz`j≕.1Ľ+ItPrasǮpmt#Bh1Yxk,߱?ZȄ5bFtg"3Yxgj1Ľ,xwfn؞{:asb}f}EkSUmqۀ39{7gլdvC;yҜNB@'-}#7sy# F7S$CnSe;|˭D"L$LI3!_B:X)9/|IQ,~]tM~n~}V߫(INDž4_H?9\ooS;|ͯ96yo jVmց$HF@]w5 ÊZCdHe) HQREڢCZt2X9c A"I8g2V1XvBfR1G EέRȰ, Ƹ#aREbD h̝)&L!XrBCcWQ`Tkfn53 z4bb*{遻MNlQg:e:=W8%,ܤ-` t:]H9>MZtr[cb5tzL[5z6f&zh? 8"CDG0yT`}ԫ ~98Qp`HjXRD¿9N]`FʩRZHQp6q֫hVYr岴6O#d*?z&Ywkg"*3+Q*: 4ъ[+DŽX:FfY-#" 8j ѠGm7 ,3H嶌YoF)fƖ̶Ppf7)@\ܰ>vQOn(M/bgYlQ420fD%$FtY$R:"*GLlyXk^&ʡa iU3`!A2 ('%=rAp94^ձV^iIBb%#1))DXBx\LFbu,5,jZiԴga{Ph=gv^Cs:  !TL~t~}(ɔ[/ |)1F"r2@'RP;1Qۓ I겾6[  Y85!ntH dz&q븋*9ۻH3(I|Ň7>}tm10^O*+tvv:Tb R!;qMNkHRͺ6 nO\UJ}#L~(T]LnO)FJ:f|Up>V#'3aN/v;WnlDEf\%97nf5$icKn麩܌VO>}Lx׽ug@4bpN6Wݵ:J+ a#aoKd6N .%y`x9ar:+stoHs{{_<Ο`?"Eu'A{MG4s5 Mm49{]&튜]nh(ڭَJ(P|MD62pt5!"$u~jtkf%ߖXYdw+ȇ3ّf|P-@Ez~SN1Ń"xZ6a-{%Vc,0Jozh2U)}p%~ф+ uUuw:gѲqNFz Qo G)7"X-48A6 8hvY-cEs784n|]cK>QY:xzuèS0vg4:6;P0GjoĨ SpTIJ-Ǹ./Gq%X*S P9K˭.#i#2%Og6T.=&&VQRƌƁZ1bRI>Xp[岍P uƽXOZ~cB=hϓ4i3]/IT# <| GRgz-'t*gJy0 `4ELK/XEHsNɾp<pˎZwVhU*'k[EĖPƽuH ÌМ9jC|Z;yl? 
s}Yb]~&qTm'Bv+{HVV Ks3Sz?t+*4!2(ރBxiJ S8+FGJ[9ҁƬ4*kc&PfTAH F刊`) NPJ25885x%:&=a?Mfמ~=xkz=HiuL19Y9h#Eҹ͸ӝTЭ%#u4cäI.L6u9~/&y~1Nh Ej.)ê``ʞy|W>vt WVVxrnG7ڥUFg:AڔiU!hmZ)8?ϔxq^ z#Q"GQO/)H­ORr栴Ж1&1 ImPV#2ai9I1.wMNT1Sd9OK'EWQJPuձ<]U!QcE4H*`\` &_RhDŽfܲ.crL,2N&vSA a41NwXxZc4DqqTEC%DplH8YZ"$"MTR#*#azI_!(ְ='>DђMII==,l5T2*"#)K:c0I^VY숳󓖶0Q򹈱V.{Z1C'M,@vHPG Z Pwe vRIaIm&5YC&1J6hvCJ+@)$(YsZfxRIϵ` QsPNYT1G!kr G IJ%m3U YUt6'IE.21bhyI0,U`)40xG=x,. :͐}ѡ+C t2@}+:&]ߤ7yUzGO-Zir$Z7Tl|3px+h;HeҷH"YV=7O.{2ӤwxJth4, r#,TÔ`CJFFCˠu :yFteЭ9#"Ƒ 3:G+]<)p,DžX3UìȺgXgYiXbKQY)948)SBR|\Bgj g;^Ǿ{NEѝrH]*k2BE1[YKd9j/$g\tfXy?V^4RY%IhɔĹ`dbI!aMHör6Wf(}-L:uqFm1cf&$: eۄ+2}0E {"dz+k{㪟]9=0vN@2f#<;ƹ,l*'M Eҋ+@9N9U<3$]5>Im뼭U!v/N{!`^ųvw߿a``Zıs̸Nkء΄ױ)觭fZe|0QSr5$Ʊ̜Pr2$eJ̪AnU&Ĩl9ˣ`Łh$v4Y"P[Hp;Vd'ŃzuGwϻ-%L:QH/6zl>B2h%9sm"U_ZEN\1'u̓/6 U;v' k6o182/AgoNoU4)&ԤZ"CjR@kjmR@ gM I5)&ԤPjR@'5)&ԤPjhkR@M I5)&ԤPK6%ĔM[ik4mѴ5F;ܴ5FCJhFhM[ik4mzgFhM[ik4mѴ5FhM[ϸ;v"4tqu}Sfϴ"%#}YCujPN8'; ;_4EXkX=f5h HE%b``Q@"$^;ܔE!"]Z =Ȳh>+nF) w΅,-X&smǨp;F9yH{F\{Ko̼+ݕ+ݔewN$xNbQK|pI {@>'u<]{LϘE 9w>tJ#tށ0 9470{9.Ge+\|\V{FR.;>_/- G|_^7Z;EY׍ݏrn 4-LgB`8KǿdO3漊>rڟ>('t9Λ>r=cO'Iז>܀6̭}}2?YW~~-`Weh:3 σ%Nw4dqm]r%;d08e|ъp p—9tE+$ȬRb<ܕD ȑh-{`S- _\nG=.Hz%E `[q)} j`Dh<0ydz sQƠB&f5NVex:/=2>~LŅ0K A+/|L +w210?row3u~nge4>Z4SgHi {yB?߁ʟ۟¢^bmuo6N2{_+ls޶5,ҭ. }}O+ȳ*/C>.'KvNrP \Rd|Vlۊm[l#*'RRd0s6"sxR"I$Xw!Q%?]#6Jp)^~8m_4gS磹'r1qZYmC|18z`؅U7 vN;Z`{ ˠytY&Dϙl^(̕wl~Z*dh@|f,vMFVq~݈&P[=kmJY8HD4(X\ȋH)^],l-.|>AK?otM<3NY'2 Ų^%Be,KdOٸsI1!PdNϚbxH 7VH-#:̅ʬpvyNʺ " 6Olը7L~\֙[69y'0U =0xk?VΣUۓAb1<ĤZrhhD!\9s:]9ƌȜZ 2=؟6f"B.g6EMI 2ږpvЧ襜V{B*BlAmrDnnKrNזK)GÚGw.Z֊$uviYd6K_1z x*IK4`\Ntax;-#Tj4d)F3aMU(U Tv 1N> Y $#uA$,R<&4*#oI%`e2Y"ʑ(4әtҗuiNYʦ$HGCoTmY'f]]vEC4&bȖ[Lg3r✉D pZI @'Y>s wt—(suz !ӧ_g8!|2J zwb[Ӌ! 
> Ek> =Q7% j g!9cl0x]o[DZWi H}0.R;&A_n ̾,5m~g$RŕHÊsxvvofqt^ 5C&Ge]qL1L읲FKe$㋽ tٵf vUs %t S&0cr (>ۃϹm?08@Znt18^w_R:XK^]47&䞡jbj9R,~|p W4{0韽og\f+V1'hNV˵]r E߾X4|ȵ#Z=ɺaa];fyCLp'ZE\ytrtv vIu\5uRҜGGP}>IQ`4=yn<-մ'9}pzyNi;/_|7߾9˃7!'ZH)%nL/ `n ?ahWkho>p9قg\]skƽ>q{c(,Tn@R67hJ ?)NZ,A&6?-r3SkTn$r8r3Yȑ?cu>oIܧ2l_j]E:7 VF98+ohS9}q# ~9'<-SFsd[+U0Ne~FӿmБNRCIy6HL-4LHϲϡtOg2x XFQ@X|axDd: EM:QUڱBvUjb~1Y7ku12~6kXZ)&HV߿A Ror+C w|y JFrꜮ/{Hʼn=jU=Ϭ~ (6A) Ռ7Y_sRN^8/LAKM6Nh=GK%ƑgA\FlutsC(qR5rQ@ڥy-ɓO\\oaM=YY'-lߟԋ_ nlT m q)4tDjb2:NspY?*O&rQ ^D`y8A%u^yeJgLMj7BI#o R*}rڍz` 6$:ϡD8& F9T!2Z[ Iz!&霹,%Xɵ9z`40tyjA^4(&4a辢3ߓNw WM>uGȺW?}7>)qv<$e/.=4`PlNziӟ{82ً-?n; lٖ=4|zۂzjzQe1j=#/)Ʊ;vn(_x|lSXny'5NޜgZ#t2_lFuNO#&q69u _mDqͨ{9EOf(+ $0{>r`z= =* LxH}yR:ѵU:*\TLy,veL]G_u0}ٕ{;`WSL9;'S>4kdI  Q~y߇z`ecn4UIN;9gACb6;+|p),}|.7xgߞ=2 *'#Wnә{!UD# Seӱ_⭕6c.BZ˵h˷ۋ0ZjT>4|FeJ"% O2Dig)9a.M3TD\ C/g&]q{C?{~dO:pEҕWY!%P!J@Qy_rA`@Ud(iagzN`k>N\>N]oT\j=՚:}=MdZ/DӉeNRfMLIs;H)v68ە>9-f%}z2k@-_EFIMRFH12bTZk"E2L^`4y'P>ҧY;cuOcYcx QcB_r>YshH$Ƹ <6 Xۡ_EqK("Vq_npd^U~PQtL8rPQZ'<:\" $.qn5fu?wW Vϼ Vh-1xQj :94:sVcigI1rAYFięTpB&2'l21CYY`6۔'|V{mΊo٨tyW"B\Ρ&cb .Xs d1[[hI >c dw_ {mGfIe:iL‘aLJQ3eÔK!=DԪqt-pjW%IQG*g"I0)Q7e0$[iةpj{ \}t/ *@CIp T *JX,/VJDKp15n!J}EOvm2TfyU Zk ycw<$$jgd1D-S1(dcƺ0:C2w z>O;WDt-/!%pnb-5ZE7{+xdY_A)N )im-$ZxㆤP~![Hʚs62-\r4f&RW#KLGHĖ*Sˎ3CI<8Y)Y@Aa^7ƩȹγOGGbGK(ʯK\ -yIX@ ȅ>ϳ.sTNX$[F RF A٪(QLJECOk5`VRHw@%Fqze 2n3R o`mOYHܠwBَ|%NvdDy"8ƹ,!8t<uq/P"Ӵr =N,냀Ʋ>]fbYMXr)=-NDn]Z2%L  #X0hǨ{|1D"D˨i@PR0HЙ Hhe62,bf2BF= Cw:}bq?qsd;~VNBpzϒc)Gx:緕$%)Y*ނHTsZ$r}zt:9εs1I@ dRasBn*hT9)ȹW]Fgy]|g__uxd5Ʀ[fYnɲtB"ioY!`TJ:)w0殜\bK4Sr#--?Ũu1ۮ55%ϬѲmHk;H00P߇ )ǥ8VT[Q*MKbS k~o~G%LB}r9kkcDwQ8݇,`<xd}b[n;Tj>J$?7Uk5VqڵJL!0Er =ܳ`ꠋظv3NNѷ>9|LsOhe"l͐e{+d2ZԡeQj;ed"z6OXDn̅Oya>?AupU)CCEHUj $Itp|wo}f}2eZ׷;x O<MF`Tܜ0)2#sՀ 9md$UƱ6w5Ūm-$K1̅&äUqykgig*ߙM>tA_NߎtUhIs Ʃ =bءqU!rE׾dTLF@:RԔWhH&-Ӂ|[r݆7qHPh*!,&'hc(UjլRb:YJ, j_|,5)juZhwXH݆E5(e/d<Zd _ތ=U;RYQe~sG\orv=ˋ'`o'dzwn8#l |X+)bD~m6WgjM},d(KdifӆJ4*TKZn\U~H^lq.jwZmj/`WۘJY F8L^| Gև nr_CLYvEgQlMaa9p%JEeqYUg{m8/ˣ5x0tfEahщ*թm ".-CEQl(+Vii { 
yM7qTw1k=[^tӛ6݈cZ?cr ]7?|xs独-zr< f[]_ / p=y-OF4l- ֔_mV~ީ~00wl٤7޼3-_dA|UW+-<NgORI̟eX8ϗϥ~Y)!e,CAR4qA1y!`-d)R*ob -%(a*8iAh0QsLu>tEV;~yrwZ++Ҽ<4.:O7ˀWu)ew`C2&2;k23R|6:S-o1{l( 6E(>0OlM~\+T˥w~q;L]]ˆvYNeLwRbYPVF>8˂0ٸ΍҇_.ۇ?-*05 m+]$6ʽ4t/N@I3#yGCWvfC^ ]Wd}92.̓$P^zxGmQ|Zj5Q;_|h#LhJ>h./ƅ/]2in7]Cx7 8lzF> "-":buQku(ˆrkT*"cđ!'d&*!Yc& j˓O"x  MktkժR\QYBibPE-:N8( n2L eR@<]oWާ PAWAE&f|qur*:VyPK'm#"cࡥ\ʋu*|2C[G(k0bJ'B!R5ɭ>iI*z%Xεd @OO=nؐίE;Isw3n^šǎ06!;/5. HbŒB`r:8e'2IçCI`AFN#jB2̪)`3 9TX{Y0=q5ejͺf0\;c ))OXច~9 4;wNN ym11Y1:?,s#nR>=޳A}:U)CCEHUj $D9΍Tǧy9rV3ˊ@bfRkt~}v,\; 22縢eA1FfdP@ [-D8֦ƧXE7Vd[`h?Of>]U`?Twɇ./%IWT=lYbءqU!rE׾TLF@:RԔWhH&- Ũ wq gO%K8HAr88䙇8wUCg;#Ô9,36Wyڦl)+ xj7l!)0:""@ɇv]{oG*GCq$$ 8!S"%[Yw!EJ""i]S_uWU[BJ1!VGp#[Ռ0V>Z7`&zzK%bdF/Ha83Tۚ5rVs)Jى.l3ԅ U oeCj,|qWů~W-a8#@)#&Fl@pdQG]d$Fn%KWZrz˖QB3-CĥYs^W:Ϻ': ypvXzI5'ӟ4]5ۼhm:|7FH/f~-%|G9oSƻ@,v\;$IO%Zu+>_쌡$Nh H%Y٣ӗ}F Y~L~ח-vWЖ&PvAݠpxn9,L[+^fݛ}]M'7{)7XHtiX{)) JnHb$#mM2<INOz˷0wF:-OZx֋yb{a!UԘ^YXG,Yb$c`-džmynk[l3[&^\5,J1n+LN;6~:*ؤ LoVtli*DDC_뢯z&64J76P*Ut'ޖ6NZ}r`12>JEL$LI3!̢kzջ}p9~4 D$-e=~:w`?7?Ѷ۔fZC[]Yhl;#E߷@`a?LPV]<}TכEǸh#ۑy,![nu O߈vPc:xΔk^*8ʝ7\ \cZ 1b5AL#,8ZI*ksWm yLev+XQ$}:*,-Ye&ޓ5ȧ7n+!}>=>o",E>)\ Gw, yqKGf6#W\_~xR;A#Q9[5A.J@YYYișݷ.i>(Q|`̗<[$uy4ܶmE.˛ g"Z4ҸfU_J;`&OJ#w7AQ)S {Yr32iFYAf sgf4%z4wbJ7gtv*퀴ukKZ+4k~M\N"6 m6KHsoDCd%C,:=VnL,z̝vX-"ro0 !% 3Bs49d$zy{Iz?3 }Ft*XU!kw@n{\rg <|Yӹ;rYrVZǫAKB ?k^y\h9q L[n Hk xI6!b ֢o rWyL,^dÃ]f?k +jpֻZ,rqGO{qO<8J+% /% O8jm#ap) RLݱ)pewq~6[  N[lB\:$Uh W2=%Y<8?KD>t|f`>t'.6)[䷺R #$TnV0GS#XJ5(qvGè47lnNujroofbdJ̕~Z[vf( ?݌8w~[PH6⑮!h8,|a0,`ZdŇd1{WiG%hI64WݏZӌ040pR7EȆ1KjlXwsG_ҩ7gw}PGg޽C۳g~x:{wg,_`@F$m@{3m M04mκ^UmNaGtf{njrKS͗o}pn$C1(Ӥ ' v# ~ք1'?ȹ].|[6J!۠LnL )&]&37'@EoV̇G38KcԺ SaVeT?6ow} /`j)JUA{ג)Jh %+ڽK&ov HEa '8(UP0Pw($~PlU"XȃQW\)E]dUwQ]qF 3GS~W}?|,80@$vʉ<ۈxQyGL&&xRG02 A{33`C=2`>{yٗ*{Hv&8}-ɷX˔v:Uůb}:VJ~U"PB0`Q:U*I9$ 4e#,*cg?HG16)$BH=(5g70z셆ai#m}ny+ 17YxrU?ҟ'*$8B@'H< ;:;/iy? 
̤r"bEb "J8,aAa<"AGelN%K{$GU!f鴰'e(â g(25E(d7L!-3Kh#S`T YpH8{V#r>;u g󋛕?v+۞R|kB?CvQ)_*\lHYMUEY oDIPki2#5% ѕ($m hHTi`$7g?d0;fWޛ׻pU 䆘*cdyRKlȒg\-O x鼸}ђL) )*LUD)qsẍJaGT .l;SԆՄK E\j.@9%ףɡ؍`H-mG1젦*_XF!*Uݘh p^K?5Dl&nX}H\Jd_.hz~qsmE #uR|c 6(.^;M=i.rlɐL*6+b2h,[wLjv6X[X1dY;m\J8 @9bTA Hڵ!AhrKBQ53(KI F - vL=lxxe\[oO2Qj\%5 JJYx i0}M ܼ_9=TK͓&.shDdCD/%;aCP bP+&k1BfP+]M8iS|ʥl%`&SV*AE v%u.wg"88A6ӉwBJGL/ZW3FW'cl6HЈPKgʁnEet Pj|5;n sJǘlD0>x|Q)Jv5k|v$ɂAaM`F )\6_&dY˂"G R$ᵯǷ#0D(*Ӫok[,Z?tʢ*HYCR9k{,ƺnZr7^k/վ¶WUwzI(j生ͫͭ6}\W8/tS(YRA 2H9"&HUOWGORSu1z3#Ϧo2vLn-y8LqON͘(I U$[ꢌ Eu ŀ2H:b/$M.dIm #8ѡtlS4ne-PIb+muF:!5-j&}WB˼f`A6=L&;ب܀JRb l@K쿔F'(ض: C +VmZ=ˠ3٫=q'B$ebZl+=.rV8{Pv38k˶^{G)F&J`M9ͧ\.⍬0h!<$ VJQtųE$ 1pd3,ՅUcL׬}1k+8#Qq[W45C>yI A=YV12kIGA2cc3ܨ~3C%$! I SfApנ:[iɡ~qEG8AÝǛ޺UY/I7}T#6̾X?"Mc3:gSx K~CtnOw*O:;gtHFyc2`.@:r$Ut *t! D {fZ}8ڰ\jH˫uG4WJۣ/'ߛ)6~h0|h7 n7]ʡ-V2fﵲlJZ9 jq7-n{uDu5k[jzy7:x1pߣe>aay:|Y>ذFxeZ^NLTz٫ŴS]B=kJm /pl4۶qզW2]|^nH^ˋyxi!ˬw[fYyDPc,X Ϗ,jŽ#YaƹaG~ҵ\/nV&jxdVk딫 W9q*+eTUf`'mt^SQ'R5 :夌cy{g19 wQzvNtCҗw2Jx]$L`Q"$HXQ[!Lr rQnV0)s{rvGh`{l2s@>z^gWܮN_yFȥD?=4rOmzwUeS)I[y!n>_pxbXV{h{y։ʴ?Ԙ |@$FP 1Bf4BZ@FH5 M mlbD -U$e5BVTЕmJk IG(YHF:2䌱I$QB"Gy煦+ٽR{C顫ԧ“RNM~+_,x(.ҿ/ u~=//W/6J!}Z[/WldNh~ĶTs}y?eBS N=e˭6oA76݆${oXyӿ{}g^'XE-03Wb|/8?[_ϡ>Y]N:㶏*+`kٷVlW?N:ߞ2b *`R_}$OJ;H9)BA-1nj@vz|G@zAc귇4eoBGֆU7PO,(ȕTf\4"B]r\7z20n;Ǝh/``*mX-A-d%ԵHJZ I,ymqShQȦlkqNZT ; C=tcR @e/g:zvVz Бuqvjޮ:M\SSeiWnSt5ҼB:LP`B BTA=p,$SJ7$ ߰u0gs(Kj|;:X3dlΉi׎k+|A4 #5=yA?. 
/PúT#zuG)TA*U K'Mqk8)\8 \b1ѓߋ'k\iGj)扊+H$GR@qޖXu?KRɑu\ m?8Eh󳾂b~W5ΏcݝN|y{( U߈Ȱ>Xw55m/͏>T>Neֵ>;/d>$^߁7qP>J OF 'q磣[:T[&#JUk @QiAj:[p&QK-?%t{edӅ!mcrmSfi>Nea/.8}Jmw:'^{zBC=,b8%5}X5iFI|kVlq!=2~5{a ͽѭ o ?ƻ%tcs,"4=`H?M!L̏Z-|cty^%Ad0}]#$ib(iUJ)L0mZtʬI%}Q?6G_q.=Zv|FrdcPդPTtkJ!+-M+Ouǩ}%pL 0Jm[/ O?x\ ۦ"?Iz^XkzDw;)GqGõܑh7E2ا;_ 6UG ]u=t('Ğꋠ+re,/{[dnVۡ|ht%+yNF GCW.cVUGi]}t݆Xpܫ4b_M>(_t3;yVX;0=&g32UzJnBwIvB*lmDlQ.B{𪲣FLSv:_c_LdK:zwL q;>Ralrdc@\ٵ2ŵI/>f2;;>ƿVZcZYjFc Wnu̅PŹ9qOa뙺ۊ0w89hh@E~"@y|0,~0 p7jyhHydRN9M\[ǜSQo>Gc^.ÛEMTUw4룲YExŵՕٿwW7v5c]B~ZI-ob?%lrf[[T(fK[꫑((FdH5J&VEJwlߗN1NNaUP,d,!S*: BG|w[6(zfq36®6*0 sT(XcD%%ϑ1 z-ڳ ,T{ɫPk,T2&RF )ӊ-`bd]v-֛ ֐!$Ր-&Kƺ&9.Id%&AIIu*i{,u [ Bg46d $zy[+ gT}h98_7X1s`}Q_n2eU ڄnA26G!$֤ t*<.YShxXZ\dgg+0|m<'\B7:F$P;A%V?{!Pb6lϛTԟXb9P*mj&xjւZP)jR;7:|_ ~L>AI`kP UAr>#Zkx) H-E cPϕ`?HQb69T\*c YS#[Šf\ Y<7FL.9 +" Xt:T!ё`*͕Rj SYi2](!m`0`٥^ Da[PQBѩ m>pCsR+/7x(Qj<تdKJ4A2'rcaF&؜uT6 C*lThs]V0 ^6gFV sKƵYhXJK |@E 94DkjAL-꜇{A16x X&Ő0W KU ƊUg c< l3҄ h )0&1 }\ V͸P ߁I%H57]T?Ai 2oŃ1 ˤ^I"4Vq̒5eM')iƒ Q !*?%9 0+CpȤ`PgžuY" |75D&/dܛHS1I(@pseĸAj` GY&Rl i+;goYdW6!;G~X5=( E̾Xhp-!x^>:]ˉuF!4<(1FDYmCHvG:h_CY+c꼵 F Lo`]^VhG1.P+fcEeQ!pB`뀉W+c P d髒ћAhKрrYe ݆)HyE n8$_<*X va-#=%EHiDH2 # /C VLta,;Ѹ<{*EK:BzP&G͐ B@8΍d!*T?<{]l~'+`də0X0X o7#9SG̕8-`I'x__b=^#EXFT. }PA^ @n@E. ˅Bdz`!) zg 2koF(o IEzhh6 uc+@lZ- L۴TsmL8P1`0uq&MV 4z$kp0)]qo )'AŬR11PA#ZthA)~ȟφзUfu(5npJ$𐗈5aĐ 9 ]nxh-:;#aFuJDM*)JHH:(- = >Q^3rt XA7nɰt6Y d߀qE#XťBi'W W`"o/.zaJȅ=w(iQF޹q$ g/GCq9ٵIΈ,[CH yTNFlV`};Ax \G@cmg! $=sr-ؘ\OD{@Jȱ: *Xz䃡,g&IS'6$@8ɯH4!peR?i>S]2e,,/| d6ܞèB;Pp2UVn@@m#Q@: nYtUE _X~.DbJp0z5LJ,NsO#܀L\c!p'w >ÅLva4FNuE_KQ-E]6=f9D2>v)zRH0K! r@uOW8hw"#x"o00 a)ʅhYG`̄_VX Ue2VqʻB4X 22ۼmλ?;*/,ʓ+TT[ ĕ RI $^" LRH! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! 
rI\pW$@:!@Zc= WE/$BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $z$dtH X;@\E/Ϟx%$^ $($I $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $^, "fH,@Y\I,HDH&BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $B@K노T}6̥Z\>RFa.AJK .{.B<{p .pƭ욬]IuwR*]{@JHu]Dw0֟&~ȷ?~wǽarAb(e (l9_BҪ-%\,;_aKtE4e 1֕;b 'TmSM&Jvg&!yu+Cdp5 ?FA3%\zu)l T(\ EBJ_f1 ૃ}} ~pSC]R\@.ŊMՓ*G] V!E?d|K ִf}h9V Bu' N':![~|U׫U<Ͻ]nzbԯ}9_atj\k:;MU62^ߢENB$x>aYxGӪO!mm-F&9MtŅˏ6w{l& cmYZкܺ[os8E.hGZвhun7 ﭞ byf}FmcnGoay_3Ϧ;:.Eq͉ppˏxҿLn_⽮:?fK5?oen[hS\oY{MRd<%~H.2;~ /*/ &~r_&2CJ[> i8k:[NQUa_C>Y }Ikn˓|j/ky:2 yg.=(l\qkJDqۥek_\zm-׷.M{A傦$ya4GV|knˋ/ _}nntc&Oh 7 sԾғpB4wʴ˭լ.jxr=K6.2X&&-Ůذ(Q,&nEvTzPRn׷ٶ mit 8|{N^8zlj,6?7U"?`co hَEhGw\oqm؎tecb̦Ej%W-U5ΠMܾbJjTqஶwqo6TZ]D'}=4jFn>ӣϲxP(jOvg-t6ra(%.ڰ$kPhgo0yĩ 6;O˯s^v<8&4]<&ju{4>ݼӍ9^"߂dZ_`ӧiZ- ]N6B%)8ΪloҖM; _FO}K R s<$%\cɣIO2l2 -#Vy(J, Kc$<؍f\rc 2z gsq@ uCOgzL:^ҩW~u;S㚆9c`ͬk)cޖ([4+:p2{bIQ'jKYt)NR)\ Nkؙ8w{Qʓ4ま}A_so_Mgy5>ۓI&_7(`02O߸LpX+qJFD Q.x,Ɉ᥀apgmr^t[2a/Ctd+q~4+sڝiC6k7I1+}1HIM*H:/!G!2NeA$-hЊt6Na)vIdRYtI3AA> >p>2Iuتcؙ8w*k1>ؙ~<#="Cq*eRp4䤥+5 JAId<%!>BN=5 - 7S[*Mynd?#& NQ͓ 8&@dME`>=bg%dqųZJKE~_DIgv`]m!%Jo'y0Oꊠ\;58;.MRu_˴rn䭛|0}t>_=J`Ne}#e+w? hM}rKc,izm=׃W@8]a.|Cu*L՝7GWv~~3 zo~y׫]^zKYWM{m]fㅨ%q &ߏZv^=G-j ~ VR e,p H EIkvYL.i~|;-i&+Ui,gh v02၌'%LeL4ySqY$LYAXM*j!0TIzQ>piԚ(c$l'763sTLLZ ɉ2Z^L- zG"8Y~(`jP8}~x߯?ņCey͖rG7w6kZei$&K$$M ۼhe Sp^16! |E" f\|áB#F\U LĴ BEХ6T8Q M!F2[nYw_{ 4 Sn~5mHPL+ 4pmHs1 FiTpGE?OkMMR) !Qu'* 2i PK 3*R `g^ K [Cz'>Lyޞ<缪uurB4X۴ c%??GoMZT.&kU'8ΫEG4ǽ?}蝹a;9jD!㔐jmО^D|H$R$/bC.)::aPN9<ՊsáLҹ\NͱOHr SIޥ̶I6E\-Bx'ճI֠46ET}ZO#S9/Cw*'RnoM:>>_~TPIbA}j?Ԡե3'rdFU w횚ֽy~"VKuGVMϚެfkQF0Ov9+DhlZo14|-i;grNOcMiVmLQ`(Y|4\zٸ evOv_Mʏynԟց Gu󏟝jzRFe![hGyuU?_k!QY3?#Uلj/ ^Q~SV?:&ƃGxanZHt7mp$_? 
oCphJVAv"cVC,Zqm+trR֮1Qj͎uYT@gD5yMR5<(tmYnw:(8CJuQ'g%atzp@کH;*ocZ}P;_S1&bEChӄ:f}]|zȝNjW^`+bB$SCd C(hx8~v5f1rl:ZA uJ7ejqeF1~?Xϙ'&ftQ R-rۇ?bs6}WS'}6nUQ륗kycm٣f8'BG-CGrĻG2VUTG6 hCwz@I2tkJUM`[%$4g {3vdll.zH9?1h6),,<]VJV`vb-]^qϠ#v(L:A Nd?_-dsJ'ղm"[{"0@Po!t}i٠0~<_`.Z)USêMu(]NFS ʕ `a2*&1kwZٻ-[mU[BY*˒t-"9[\R)pEC-[FFES+H)i֩hCUC*Qʊ\肅VԾE-KCFVf< ͼ1 .⍷ae&ηUi&G͏sψnhnޞ0g fOW 4F}| U"fNfLJ}oCV町 U-vu0ea|Jc|0G`^;i{Ρus6cڣtTO>n||6ۼ.S[=C@^4tJkBCb(RRA=SNw,xPhoPh!:sm&(B( >ւJ`Q)rE@EoWR@H82G[є jPઆ "p$Úmb*{#)ҳsw<Ҕ{x6"9amq OjȥN׬μdžzǟxݑȸ<;\)¿h_dB6 lZ3XjIyIWAۚ(8hB:Hl2:U@fg0Uͪ>s3_vx6m@NjH:Ųvb,H7wP#ةP#Q{;ݫPh{ {={7tJSPa/;d.@]KLB: }dzvu<21dHh2W%:sN\.Jl֮ $cU{BFU`N WP{q9C*1{r.Q17rngW߯r5UޕϹATC)6ѓJs_:1B$9@Tde-S=Nֳ3jTU0Dň=sqR91&&a1jZA|r8It}r:"Z*XjUJ (;Uыl~*۞x&CM|c**@ctڽ*ψe)ʐvB!ٍc(lCɧgAQ%')o/p%ۨ)ӮpGo;a{-A?gf{i%nushȲ-\n0=QlA`ElYCnuý-UFUÚK\Sgfg <!);T%ɂ2DtVR[7Tf}N-"Eői rm T*U7B ڂVYy[`|HV0),`#tMgO苜q꺷p?m]g nڀ3y5PJ'] l(.BIR lw-d])&EFXkkcr0ƩP6Jix7on|B'rURTcn $QƑK*qQ$^cj1CǤ|$>z0rhXv@V7 %VOۊ,V!ε TML(\Lul  ثQ@{Rз2]@5A'ņ}Ҷeu}W썜YCPϡհw9D޲8dqH A.2B@DAS( z¹}X*.Ły{az{&WY( [j7/Ac=bUڿ7  w׶:ϞL >T_;ȵt(M>C7zA6]_z?U7QRR)S kN*'٨H+[(+мq(xJ߂V ק|Bޖr޼A/ry/JɪW)*W`:DJh} =`,$){EWd{kjFO1IcJAD CmA^G Kq;YoU9XDN, NWLh琲:I#c$E'OjOw/0UR &֡IV C #W"g%|,"e;z ϥzN 5U}j:?{9OF]~ڛyL-C4{{{[M*+*^J{mft'E{Yɦ1H 0Z--ʗòoGfB>fR6 ^3F"|??OgÜG:jbcOУ.T57u'wyd(d;COͫϿӛ} o~}%'?dN86[I{ǽ_0tkho=!9Ղf\男{}Hٳq8 2e:{iSS4q\M\Oʨ?"d -k磼ł/Pߐ,D戙ʑ`ဪl?5Oev~t/ⵆպi#Yܴ õq D~ ߆Y1 Ԫxf_T*NdljoBj:9S)kǑY5;ZnVfRD+SP7IմǴVD]xi6LTkE{ݟI^+@IXqr&hQ5ЃNfmOVڅH!*%%ɲ7d!iINgk %X)W'ߔmF Bj8;#0&:+L V:j{-STLɘ+gju ߖW8,J8aL2_p TւqUY޲%͊R*IÆ'=tH$1}]t^ -W1y;g{X#}.=>%~Ov8}xK:`ݛ 歈&w좴}wIgQ#J yzvKi&B,g> uQrjf2eˆe5l`&u/!ul8SSz!9Cqs/CoeG^!jb- 4Ɯt-ZlX8pS1gPlSE^(w/dl}~,h|{{ϗkU ?zz?~|ig\i[綌'[>ۮ{wg_ Pzb{nnpY1qtwCtz?;,X3~}wT픾_M-%;33=3-18-JinN*">N%LA-<z R s>?~Tum^D1!Qn$FTPȢ{b^hJ\%R8VHu-)A"H,ĢkA5@i ޼66/V~,{ޞjwAF˝˛A׶?yȫo.,.h0Ldff}{ MI|&\Bˋ&s8kn Tr P fGdr@GD!2G!wB$QG=GKLٸΐ@*`pH>?(~IRejZ Էo1.sIGǗN1c5CB܋5; zug)%"L ] ]DO%؊[!Ut}9R~}Y7SZK}Q|&!r$1B-boaj1YP"꿜 <[0˝sяɂ-90E'%f|H8j(d@$ٖT@ 5]`o YTwqrFq\ךɹDZː6ojlpv;p_vwDRl f5AiD]gZB5 
WA g+J0H^ODKuHE%XH(1j5N&"P;0EW3Oy+j΍:xX]""p`aWI3ҜQH{bH[lqTun͉wRf=wS@_B 2/]Bb0rm1l EDtlBZW86Ek|aݔߥrGRr) ֐!cb4L5s)V`gg`lbfs#/.Wu?ᝃ|:&Ӳt=Ԏ[r0'($s+gf*Bz+PtŤcGi\ VBWGBt5ʠ7Fd;!7a\Z BsݰlfљR 5ʆ'P! (=H-ThV5CS bP"U?V5XC&0Qfs`@%}JB!X?,bZ=l8ij T`h5Sc$ RKCp ڥV)q6W,q{SUbۃI,~Q$()TVt(ռb5YM~5Vcڰ Zu=P:m1b* 1|KmlzA.X$$AR!Crh`4l8ہC/tLg:3/`zk@耂:_g3OӘU5|Gdz8&PSX&Sla~rY\^1Y#eGR%饧]VJBa.PER2)h K(qb]p9 .(ܪxĺ\ Ue[P2!Ύ Xi<;=+˓׃7]<&;'t_x_}Y{Mw/ ~OLp;j sfi7}?}QƦ~kIXJ6*I)elȁjsv.RUJȌ>Q=l8-QuLv*AW.ڛ4t=MKRWrLc(F;dδju@*={SP=2*4T+޼Uf]LYooLb;#! H+~pT%TdcuyRW3&_mo*9N?8+Ėfc rfL/0DԠP39drywZOXL@ҫL8^!Ч}1CAcI_\iM 6;^~FygEi_W_>˟VMWNzc,&1$ IjS'h$b3I$c6~jonQ{&۶ɭRli1XLQyhuAzaۋߥ'S{ͺ Oif[K>r;~+:Ė7m3-TDQsj%֊y=urk~ꕷ'VTS1ϖŀVSL +s`HE6SƖѿ%̂h V]@:u=MYRqm "ɦCuR=Py#J9/63  a{#vw߼XSduq炮xx$&̈́!ֲĔ+!D!+2AY4p3\s@I("KdCJbpp%)Dш厚nGtG=(uJJY*5JbM8e 'v.ۋ+{ez{2#2 +HsV Bl HqHuAm 4ZfpvfFڸ*9jP6`!&#-ͦB e0WARXKs|8p>x-&,k*dVGΓ-$goR4$P4sHvcK-yR {88X?.8]2Uxǡ_89}mKv WX/֓^o=Zo^bKm6ӈ4jI[Dvv_w~}vk<z\\_=2wJgםoE۩ys7Mw4]sE6z>>e_2[}=t5ߪt-Q+yS9p E7uUo~gJP KN$3؃!]5uRec!$cX `,R1R26dY^`EB%K/WS76'K]/Ohq+hѦoeg7 `kN]7fr:3 e{}._O1̎!ԦZQ&S:!f2 ɱ*IRK^bqEϐ<@9"i}( EeH-}O_?#Kf1EY(hܲg~ubi{OC ܧb}X`ۗ \,HJuc7M/5;'cٞqǓ93D%_|lT6hbXr#>.x:CZ虑Q.s4jF5f=z |1{^ grY0IҎyح}m@Žv~T[B+zR(<&B{~ `% Cb^M 9;+ZƶYd w-G||sBڣJ]R!K.^)K\).ϕ2a3.ܜW<-wp |Cً {psO?iէ|90gBwD 쮕4u:/0ݥMfaKy L+$h˰W>[(5U~Z/8 ɱ|3Czw4,.7x^/BWd !svMgڛxt.f[3?~WQ)Fq->!/_?|R2\ʩYi+5q:orMYwP z du)㼚ME19Wv\,z͹XźqL-rXn:.q]CXld]NΏr`98:{9>8v}"xew7?ysۑwbl~>.xzKϾH<#\_יe˯^SURk&itT:=Lc ~{4]>, [ALS7yM:p8rm@æI!=,)vkGkcjOfQ|yxu2jd|4$虲:-j j!TEB>ҟsl,wCbP*Q~6"bbQ'Fΐt6f0ȝ'£ l6[2X[ E′lgΖv+0ˣ(jύ QV&xl9S57)EտB"j#{Y 4_z~aRa.$ŪQkr$!`$\'vֱf5jO+& %1Xj01fږUi&8oR9Eh>)4ة&tbˌ&f4YQa4]Uh⣧ īMP}=lEFN>_+r8[ۣWDTSz -Ů0`Ly4%Ԣ-r.!3iIlFۃ(褴Zcy\|+1LC0:·zk_HL&Wɉ n-DD HM^(ĺ`[ ߮yPH ULf؉d)*2 oxڐC!)nI Vp:rfjh59lɈ'4(S @^,}nZ)ZVo@X2)=L٥̾! 
B0Er#H5y0q|sri7_mo 1Fg>Z_XdnM .K YCHbED=^`\\@!c-T瓎wx&$d*64kRߢOGt,udݱ*&WBBM YԘf'/Eg٫T#R(a+/t >KPH؋6*$ahSM ވZRjqnI]-cf% R5HOGo\ M|l1Vz&i~S)Ů2?m鷵xok,ˁ/do ;ߝ~?7(-x''Wj84E6+ɡsV~:=4%VcB dG"e`Jib],$?tQۭx.UvCt`Z8W8uō?USBلXIYLɫ&S\U ]YIfb+lWݑUoȚM^7 |0 {#vo^ȿ {2riᇳ=ԱI8u#%"vr,J<ƫIդjҳy5`HQؾ( Փe$rhme_*d iV Aep$ZZʹyoP,&#bi9M1k0q&FQhҖvގf@ݷW>c3g#A= 煨O4p+: OZZr!|`2N@ WchAO˺qyX֭65P,k8#sڔ]`&:6EJ^QW ͇8!Zo^ɇ9\*%oЉ m!Qکܤ&Tw*է~eA _*!dtϪ\lgRLKIأ]{S`_Cviw8d287X2bac=VGבbiɦ CT(~rd9 ⼭8o+beofw>HfɭEqی 9lL"t_DDJb}bSx壣0K<þc%=tv~^^:;O|1?N1,qjxa r,5psouc.Jӕ^"Yr1L.ՄfF!Iؚ3IDbh bbVbl/S3U3je ^ i0&ΆlBP#;/%dѳ-Hd'CC# U'K+M\E $Oq-b5mlN񐲅DN__U![Lj$ɺSP9%2PAZwKEig>)~Y+4o[cOP9tcU :aTk`nhiPnW~V(qFl>9\]Aaڱ-jߌ3]kSKDACKhTkH\%yKv xa([ c0^=deƫ]$Ś*P%*qT7k :բj0a<\::0L9Q-"hgDU,c)wm$G_K.AFÀ8Mݽz|B?%F)m!U )Ґ){V4{U]TvIII՗ W9{}?߿S 1̹QaEߏac$QUIν]ORQ[5JIHm N0@00@6A1pKE^H1ăެ'Ejn$>" 9k7Wi H]n` ^ﭒb=dlx*/V q- IT2C0B p /x "#xɥDO$WyJ l%B$"p$LŌ9,>CZjhAs%m'yH׍}TK^T0)%Zi,PHr$1/׎qa+(+f]-<)ZGԓ@Qnͮg8uٜLx)O 4zJEL@1%b,epi'9e"m6X ,B/@/qKl{cHrŅG/^pג"t\炏15=atϲ.ǽs2E_2,ac)0RZ)\vB츣oN鳼.*W/2ŮG{V)e=f@4E2() ##!RZ"ɘY"J a &.$S&aJ;f2Rʃp-ᐷ^jF>wll1Gd@l{P;wy(֪,ȵ%C>,CHm0Z^BYX$V!O eFP\ N[yؓQЇ4X >oǩے@F#3fEA#H}+: dNaQia( 1 0B'm|= " Ez Y7px@b &UTsqf40 ̃LFu |7pniF֪;ϵ6`.⩆A!@>QڠHA#(|s3(mϒwio |U=LfЙ~ax%gg[V tWpYUۏK3&BسR1SB1BU] { DP,)5c`]T&"a :RLݩõߝ' OC_ͳ@0UiC(ᦲ)UBdzFub;8O;( &|Ň84> }tgW᮸1ZT_Mzlr}s6Q9BHpp>u!?T[8'.S=Js#L~(Դ.LuOUN.G1Kozm_9#Q1~MB|plqa!6#1HmÐajf~a0`ZtB]&+&kգN'6j\w?8];)9l G.u};0IlS\ { S+:'Qwԏ ׻G߿7wo:׷߽P8m%AL¯;@<oz:0thCSs͂o29qB~Jّ[1nzc<$wOp?5A#%~ ҔmkU9}%B.b39Ҏ  WلO7qOE 7+E44s gӛ,hIx-arcQz;UINo@ TW` WdV}m@v(wR?Gf|pxWb:8S"%*Gc 1`PB2H`$$D-EF)tmbߏ:2ue8r1fFhohr0 HAeyg*FYŌ wPY.3,۞3 Ѽahˊ\C"}&|a|TfAɕuJ/&1X,Qd eJ+gI1i?^MZ1瞝g?2K:s4,#cUkpr#%ڒHT%Fe<'lJa^Aa#A29HysQ PfI ;ْ@:|!<}ig`:R4I WOǷ.cR :cb<)8x)Z*n8uyD#<CTbrHx*+M ]ɑPR' 'MV^׭k|`HuJ ZGK2ah՞<#,RVblIohO~Y`*o~JP$|ڥ33 >TӢX6,r%ˆW_ σO?H{zmZoVFv(~&=Cd0֗iMBfz^.<8Ͷ*k ^lozzH-fPIu>&ޓ5ȧ7>%baW:05y}3|ΌFX|Sv!2NC_@oGa'rPϸNԊ})ىJ9EE@(K)@`V|VS ɫٝFA8,: 8ig(~pa,ԇ}:4?}§+W÷i,~4_c.!\=t0_x HXn3L@umj}8 sn9Gar-# 
dRKJTR]5awƩ1"Cp8x"q؛jNG3&s\GӕGig"9D+>T w}8(-~aޕ^Ht`NO iC]]qh官1Y0Lp0bJcD2Y+냉#eDDL ` Xy$RD [ܿ]i%U/+;dNZaxAG+uBgG]go_ֽ W[WY ;@&t;~o\@.G܎'j:D%Ws;.7| !>ڸKXY^㥏gSYw1S2nMQ%2)|8TSy>[٫jۓeh+Z a\L׃DjJ{,(D:¢g +6N ET[/A!>{vӑIvQSCiW]fzT?B)p/nB_]y M+ Q AxwcmJ S8#\9`mNcF1ZleF͝QFGR*:d2sp6rs8x˺̣p2GϞXAzn)* 34=A$aptLe}JK@Kd!R9ʔAJ-ƶ+(}Gw-h&P^ tuH,.5z.<`%%K&p(M/ovyia,%%c88/LV$AYPB$Rrfb]n1c TU  I# _o vpv-b1ő=Lf-ɑKrlʖXVSY)^Q_ɔK>qpo"ʘt8d9XbрNrH@OYykTt$MɰlD&gm&a&RݝMN~'NYUeT2r@ޖXirIް_Jvkh{+ipeR^߰f W5Q8PɃyɕ)}Ji}gw┃9/aG㣪*ƜB%lll %ˬt[Gnb%x!E@cjUKpcs/?tNy{1X!*iK_sQ`!8ͦ5 XֱVIHa1OEy9讘#˻ ~M.3r^4rGo|{pwXMe㳋&=!a6dȺ\J~013AU= QLUs&ƇHfo]>rGŇ#AWrݼN瑨O3լ'_٢sc?I?~hy6[~qj󖿍zJ=JB6xu/~싈r\^ߦtڰ;u%, cm0&eH7JEvx0{Zī~ɻ;py۪wn?@=:î΅D38 g34:OؠkӮ 97WӼ'Oz{W:U(rօ6WK1DeҚzW1.JIIҏc~:Z&mWIZx6[*tRVgsjp* :`* cd+(Ro5/I5}a/d2Nr2 6d]Œ8)FUZR &ᘔ)QmJJdf !|f\mg"saNߴnmiΆaO<RkrXQօ Q`TLi^1bfe e>sz~Lгz&yv&EJPT9BNAi_DےZNy]tzd9_XWB" ӈ J!@BigT@5?9gg| rnZawu^tqhdhנqa{0N|7^69A3q%ؒ,cRT Q \WS@LΊC/>*_ `_;#ʿd}`,pHN}脷?_o ~_7X>̎/{8/uG-#Z.kw.f}^m;] ǥ <$˫/вniWvt&mwcs/$.8ɟ|ъp,pe/\#m=///<_MQvMN.PS |-: c 7akZ5;mu8xsEִʥhXz5`Đ#@"HQL:یn.$᠊u;V1oEsycg!Ԃ)Y3}Z_毋؇ 36 v1Nz^-Դ&?YKχ*X/~z^N>OE "o.=?oQw2_ ыqd^YY#YG=+ù ucϫ!L 7Py rPݤ˦Wc6],r?i$.,P#6,}}?Dh+|xs<[}jVW|/8矎;AGU`H^O }[#ƣϧ/ڒF .bw?I@h}wttG^K6Mbn:{7;4`.&Wbm,O~ w!NN;ۢ|ٖyۑkmv2 0?ݙV jʏCޞtMBЊDfժz=T5tz*&A:vY!rՋm3Uk֕eAUe)j|>${s-[mn^0ͯD1bK.Nů!'bs>:N}H~T N$XdX:4Ӓ+1lרb(%YBVdoG1.i6AV_&Q0 + eDI`m*%m-zN6 : )B!ޱrQ&T{l@ըYLbybrՖ.l7 ɨQ{р]V^;^Ά41(I0RCW0vnE7"T3XQxuzȅ]>;=%P%2y1/xnrgp7__쯯?o]QywDH'M)M`9Jy磬&Ehc0}UJ}ˣwR* rY>3)PjBS2X+korꍌ݆qnXM2Vu8aޣx///>4fzqqd[|nvyN'`''Nр͔ze+dCٻ؟ Y)%,Y`jMQGercFYDcMMy@%TBNT $+qUb^ٍx:+k&桠v7+j뾨oBn:5"9ZVsB'6Zb|qk+9"0ˉfj!ę0: b! 3 V4Y5c1&(%odlNJĨfVpvhԯ"<0 "v"錈fB bi#Fi c5Ikdn-RXT XXzZR"bjkm,թm B7`jp^ˑd2䠁 tFnٍ޲L5 .Wkj3. 
sPsXI~ Eǹ jPPń?烡y?(b 2|JJ>*g{]T& "aKA)nTU8Ëj t/ ukWÀ`YlB%¦ Rr%sr՞Q0PR,>$\{?bbj!R۳Z OO'JD*SN=V+ *vqjI`US; 9q|]Qinŕ95YZ_kջٝӓEbdN̩2dT_sGoϓ/01Fs$ok=V,2aHѸO7]r`drQ Z>j;ɮQ*Urvn`_g.uX$p1F t*'5l0.`r7GG^ѿ{#L/_?oP$Aݛw"{pc__:0^;4Ul)9fWrǸ7Brژ[o J?oϿ<*;GzryjEovvrεI'-jUyUH찝 %W>׵+9?8ix`,(U2xi23ŃnA*m~#V+ ކg&qNF˦ XFZ3y܈`$f8UO C!fGjd>9HysQ cͬ'e#g.%}9%amNCȆCK`ϧ֝ Kñ׎I;b25j< rT٥rVT܂B(rԍ12dp0J!]Znu '1beFKKE/*ݹhP'cmR;(Q)딶])f`k.PHØ8P=tHJHݒFߩPr5yaIɆ"A8`ISh̔ɇAWW \>uGHף=>ƟGDnM&- GiL9U3 $Mvc}Ki:L+=js^Xe,ۢ ߫z`U+B>Δ9cT2J1- ־tb"Ѻ^NpGɮmb[ ۈ@N"6ȼ$ ~ G M!2Ak!7PGCJ90U*c=N;[B7zf9A 3{NZ.iYsJOٻgjb,ڃ{몉{BI\v-]3Sf^_}$}Uc8j/(-7?, X4lVFl<ۃ_eΐrp2frwn'k̮Gjk誟_}}p+ͤR +6Í9t UwJ(n ҕB&V5\BW @uUW=it]i8-] t;\}B$'QV'NNW նt4}I5R+!DWVsQw""Zzt|Wgҧhlbގ,gOcC7@^TK2aؘp4dh\?_ι`1X>_⳽ɑu1i/(y2j75lPTʼcb0T؂!K%ހkgOK ̧w&'3__}cgmI 9Zr;R&=TM_&n/nC)K۔3t̉!gL\`s49Lث9Yn(-K`RGSJmΌ6I|`%cbr>[ur> UDe =DB)۳x4-/68{xqeH/ULYZ כc̉nM`ych:)4rZwN(ni ҴLi Z5.I^Uӡ+ѣ4ANp+iS*UOI(5k ҕ"7*fKJpUcbv \wJ(q"]iBf X6LpUcA>XUBiKW[6= ±8}nթnp<-]턶[vCIjt[zh/\"!#PtRJh;]%u[գ UCW Z ]Q +Is*)t_]%zt60!1tRJhO&ʦI-]=!R3^l10!úñPJMכ3F9kM`*\NhYJN1fP6fJqn])B++dcⵟU)UWO4g6L%m ]\В; %-]il5H tS >PmفHKWmz̵ԼAt5\BW tO JO WUBU{lKWCWVgYjݥj*jEꥑv c9_0![~D/J;5 iAZYKYMTN+mm1L8\)u1 gzjLIfUG$BI1*… T+%I-g{H 4-.P 4e!wf>(/\Ho(~Z_WLvϽ4Swxg{A*|Q֢(>E2`| $DhJi$cKd @/>3i؊>aCR}e~:(N @Rb&y$8FҎY | \KLR5lL'"K~(kh*Φ M &%(Πuɪpzcu`~齼|5.O쪁]| !x^,k&A +]n"b Ow ͲD3Ag$ʣŠ#@0 NS-bXlN'x jXks0C[&So pxa:RI@\&dNF03,*  ="APzX p p?a(upr0`vֵκ0Q 6BUTqqfY$mɡkx(s\Obwmm:J[Ҍʀns?tЁ A1ȁ;R?4 { **sm SFH:͠7+-gT K҇K(`+/dZ^bUָOɛO:4.g/[f?Χ d0rfp>L{1څ߽]P-EGbzۿCK~ Yfn aI;"1?YS C9adP$XEy$"aHu>JA)c&paם:[_  >gqk ˥CR pe݇ᴌ=wgt?$R\Q4ۋo,us t0鸭"jv$j8>q0QnZWgٕȄʼnYUFsh ?RzH6 %~<[A`m*v$o/bS1ds1Zm,j,o0o04ŋ_wTN %n-K]TrS*IpqXT0a4.BIF1KZ7&*&~-3׿<s чϾ|/>%870 Fjo=|S~ڡh}ET-mr> U3w!@~}y9'nFx. 
jW b?@ T # _P"d#A1Vdpϯz(pej^BV;u5D#1}$".RC ܂H1 DԢhk}6,(//zϿp4sIΫN`]^Uo;?|s:ހCS$s4-+fXF)ǵf8HmI Q0`AuR1  Z{1[,AuܯnR?poW/bT^?PZaX/w5۠z]a:Y ߎn6YxQ ƹEmr*L&y!6o.ɡ{VH1LX=d *i'JG%mN.?\Bݜ(_ʌI?=fbC/Fz-^a6ҘM'\p{3˶ui8<IؿnVZ-q9F3lM&7U N,|PPj<&s嘔R``}ۍҤ+TxVUx[;jrpWn^NYE8@̙Finֵb8(znrWQ[z)lUĴzDzUpX6bLjFWaܔZVJ7bh3)ئio{Z1r#׾'H&MNl7ir┦MցyP)0rPz'Ta^Qka5 C.t 18T%N*Js/90r/A:Imԑ2Tͧ2D2T2T=2@`ѴX*cbq& 8a Zc!t IJK!);TaR EcD2Y+냉(2""&ZH0<)c"8-6q9_4!|.27'k}Iy_(n>Pf4`CgVDqy$:OG6w# }!pGHۃvb,Yydc6Xfh D@b΂wDðq`pK5_7=WuΜzGES~\螩yb Ť]iOR')}uoi}u逭>7ZZOH>֜U/ѕNq67nzhA;fPf;ƮoS/K6>?{;ZYK0<\a^/K}ipnDh f o3Žogs-6N۩)ٱṫgo}Ƕ 1Y>Z 8N)EKn訨2&aZ`jՂs9F53UH*(QPTy?[cRQ9̉l% l̊T{3ʢCP7*W*K$dJg`}&eۗ ȢNZhN-vD{E0J`$ O1G ƌ9/٠!FAKdLҙ875k?Y ED(S x 4/N(Bs 撨zB|u6Tw%!(LAZ;Q 0QI2Θ"eAeD" W+jui6`cLAV)tKgus462@ŎވujR!8K~oam|gY%QL%aU$e,W2QcEB,J杰BHGqj{YObe |ٛ"_rxc,5Dn[Whn;jU|h2m}\ϨXP֥KoFˠxiGt!d!pJp =k!tKYE,3˭? ƽڙ!m!!Fi2 fiyΠ71FQ0H,B 'E.8zAWd%(SI )Y0'Ɏ3qig+qnVVE}QVCy\s ?2Aʃ4 >[ tio>o5K% %MFH(g0#blJ0 aJGK! P3' <#Y"GaB9*(0d-XNHXB㣷eeɷp;8N"-W,fu}[̺_+Jg_,&)PyaR:*X^r@Kv4X[w3"$jŌ^/יTƷ[ydlD\Σ`&'W<#ǬFL!Ǡ<3-t/љ:t֡$jQր~󹎨I4&g?PHM@l@&1<.MfwfeAvi`)Z )'CHJ' IQ_! 
2i\G.y9Z  d&R.B:6ˎgYi7!Cu^8Y)0=NFΠ1EN'Е8WMt< } ۗ}KR3$)#TDD.--g-]稝$ iXhvN^ RqU2Jђ+EG0 09BJüS('Vְ lw4lk BtLH 8d aKtgDuWP]ucJl䜧RHrR O<$^̡DHyg5XllذIoA[eɠa: |})k%50"hX{)6 KqH6|;+@iw3ƨV'LƔ T"I:U!'!IRLnY>JxA)mD;R)oug9r;k}.ī?%U[I9MDBQyJeu"E\>g@Hxmۧ`DGڻiȁ&WQ;^{|;7FoM8-+5,$ E$X@zSm5JP'؅'ӑop*ӣѡ; ;OI_HnQ\BWM彻I'OKF2"$YJS _Y3gKx''kzi˧@SD#Oй}6=O]#o]6{5j njo6XdWt9k'3pE)pD+U!R˓ ;oc^pPty;xDy&&Zhv6Y,%pI TJ9ƃ:!ԣPOwċ25y $NmtwP˻i#FHDg:\(T )i.Pdx,%K!5CRtxi\A3xOlp2hBBg dc{YR+#0քÂa\ss|VML,ZKZ $ŘW\B$mOCT?$4 awf{H΍1^t J_- 5,BvpŀF/UZ~n,7 NShJJD3%c3B:`:c Mӹ4i\;ųa\)7Z[_ϖhVweՒFWM_ٸ#mctO|=+W-?O6zZɯ/gҔC?MƇov33L(9hSJH߶l}d4/\$u8qtAq[Zi}_nr*1|ݦ'˦IW=9&*@UtjgKԋt6+k퉪.+\jhd'L?/\T h;ri_Ĩݜ=~=;m@ ڍo[)jjR\fYGv4\y;xZ"Nj08zU78Ԣ }xMjmXpi{!Ī' z Mbm_v}-HEyt N|zqf_HmᆓI4EdSDOͯl, z;<+w(n[` /%(g/0~9 19WJE6:{(t#y1CQ]'1Ps\~7ξ$ȹGS' ڨK?lqBwh{,+r#ԃH^"QKUwZ_;5,|Ffћm#&{j[]݃WtFmP1B^ƺ4D@af3;֛c_>%.5*%C"sN#lK.َzt={i.^{D y`Gp%ƑgU>\4 34AzM#Hm7r9Mg(;ˣ>U.)]z{&ՒڕZgUu/Hp??WUd߷PulJaн}Uۖ*- mkIzvsiU/z UΑV,˘Rrr:@AzmRyIfIyʒ!8zjt2µ½c ONA.qJ'y\0ɸs`Z1XM̐sO0V&/ X%3Ftx >i=R5Ҽ>@Sh%/Lz5׿jkԃ&f)|Q߷Nf LQ>k5΄Xª:'ꔽCuJQ X" pF(¼t:Qf J,D#mƤ޵u#"iHdV4`fn"blđIvnY՗HlmsuجH Ɔ$dT$,ZY ԚSU;ggGcbQ0qdS@=[AǏ?:LJg-Hd'MC+JI}wlLZjآ9M`rdUZ.:9SbxV%ICTơrJ`E&nE_.'2ehZT_9a!'HBNή/\dJV&UDCFaK{Cp?:Ccr\(fP+dK9!C u79j87Ę[b~Y &C-3TyJR іq8-82[ 8ƒ$4]xCN<՛rj/X˗|qu[lPh"7RR.H1D? 
1KOj:+Ck9=@|I_ clJ}Mkj >ϱi$}1J-y,V{긯նcjjv/^`(h)rT*O/2@]ɣWzeIR[3,Q(@ܳLQR-a춇_Lux,qEf8[ĭ^, JJkt-I9iVvjfZ +[u9Qzhm$Xd]M 7īE`QBUd}.~E&nsѯ.^As*]lx_C(9>`?V#56HYSs%E!]<]<{8lu=0a;*-6M{Nogq :.(h=OqxSԜ\XwyÓO{gj4]=ٔu)w㲊OqyY]U}/Z{7gR</OOd*}g8ӧ{>\>};Dh]iP1(Y!^JL&Ev H8q-(!>` s GE'l6WWg)Á|#Gh!L ,.'ҟcTv9w"Ȏ5rLL1CNP1@icts&%g[Kx_Bpҟ =.G$mz ?8rNȭ-;tzw}_[CM>oz|z`u o͏nޏF~ o=}=\oo}ww5պcz1p%Vh֝AZ_?}KO]oHx=~f''rByĔss*lWmyIO>p?εA--r!ꌝ hڹg~=F*׃]j1W V=2h'2câsxH>4/Y*\U =f!G Bb5$j&ٓ_2O;3.Vi ]3>/uA/٫xd{آ I;EcMU)'?V /=X[64F%[s&K:̹R#]FyJ,-m٩x sKM:uVC(df{Yi4]Xwj.)ة0VT9Nj |7*\כ,/h@e /x=~88q{fv}"% '%޹ѝo`j#6}N.mSztWM^ovp݋_M_] {1VuIy;LIU/&ɢcSNXaM{bc_>]NYoP/&գ6?..'fZY{)vlڽ{'?,/MlZO?]qJ\Z=Y\lM nWP'7_* Q|GU2 xb7T*l.Ey݋M+aK7M.1`ʹ&ζ%FٞIRNJ%AzS0vTM_jGktk #鑛 f?}sr뀄96E]$ v_|8@2Mɫ`iJE$JCR76$QHCR6T?j\D5 9r5dTzxdњXnMp 'ee \%Ô];UDlTlc ]d8{4~J$/_c2r_{jXכ5d|_”ǶjȀ\+z]Ic)k3}%z AfM9MU1;L GO 0zсspPr)l`S? ݄Pw+E \,6*&a`%%R 7A*Pm"a֫cMu!gLZQ8f!l]+b{ [[R M8ycGcs~_ށo3hv~"wǻRyu<(w@q.-5'ɡεpw'Q^~GN{yW &f%X!GDʆ}+ I~S&\,l dkJNٹG1&g9h-&n`6#c|ѳΜ^wTSguSAkw}o] ~}zmWϔor>Q%z$nh&+ iV!ʻBo_*YD@6~dBΝ(v7[o>S8i⺯)$,<Qu#% ' &i7Ǣ@1Ϭ!&=IjңL$(bL XI"瀁m}XأNIi!pj)mlECS,یs?b2F >B_@_W2/襺7%=@だzܽ Aá[mloHz,"(>x=~(p!T]- z6阋H73qNj'Φx OOo?~ڜfk]pDLe?Y4 ъHj]H;ј0!cc Dc}2%m=sڔ 0Z?^η%]?= SUy❣/;\닳[7~TPLk-@V^hh$W/j,Q5U^P`k6]_fp c<M 1bM9G!Tsb%.++i33|L pw"|AmaVG$һ<Ωgkxɭ-U^?M)>"־'%1PĻTg3_^?hzfA&U5hg=GBO9_km _v|? 4Mݢb? >5jA{8Y~$T uJ|**1:"X#Dwi3'& eDzW_5o *=uC)Z貆V? Z6]`:9]׮}7..V.uD~?|]Xԟߓ+knQYɚHPv?K-^ϧɂQ$ì04/X ojFAFۅ=5|b{Rz CI`ЪWSƇh.&Idf nS*di剧.,ƾ6!(j/^u\|?/_Wbٱԩ+>?ptNT4ߨ[~⃩:BN3bmԵ2.PE[^'7'>5\i߸n@h% 73z^V;{dNE= 2Etv0;?Mab9A/faO9?CR8=J|55$D,RG\J^3>ֈIk&uv|1ͯ9̸6kƧ*!N9QFq jK/9Z8<,$5dzpJ2$DK~%:3P!EJzBZb\iTk*!\@$y- e&#QHY{%S띱$ D佖qά]HKDf%99[ \+ptC]=MVqN QyPT8p+= ,fW+88损 -,0vGE;80I$*Ű[)r'j6r6Op6YjGa4b!Ovb!7./>QҺ⬾]Z9hxrsO]SmWuc}4T$auFhJ|JcNs՘hŭcBJ #Hi mE( ,*E3$.APNq飶ěU`̙ rKllJ6Y3vej{]_ X77kO糿rq?MҌQ5%%gHr3WA:eҀl@pdQG]d$F-*Ky3 t j-$M* %^Fґ0хTOfsYbį"qǮRm}l )  DPY:te) b2n2g 4d@h kꐚ$GQp|(UfyV;$E"f]%", v PJ}d:; :i4ёM:'4 c@Aӣ:2BQ+ hC2ЍAXG!1F34a04(8$Ke,? 
-Ū]sP{dW2E:A.xWs۝qk/"dٌ^[:l.A.C.<"np"lC@&\Fn ,?cƌ6n0~cm =[? +^Aa%I_ZiJ:q|[aFhV"JI(5YbT2Q896sW?-tLص]f7TݮW_&N0 mR^UG@w޿.ve9.clow4oEw8ծ[5_gyz~Mkn8S?^knnH9y7L,ڇBRGKdXB#25"vzNUp؝hTn<ճc@,rqGO1$DhJi@^"K$HXG"\J J-&y$ ViǬQFYJy>%,gK>FΖRv1vV%#W’RG[II{@cw{;}wkn!OJFZh,XJ( +|2R#G.NScmM`z2ʌ`RR :\(|7Vg|f6R")9IE$s2JaQiZI k3H ! Lltuw> {&?qYnx@b`Y!*:8 |=p$05<(s\ˉS-ML$.&_7*2QA>$"c%~֑ih !/\n!e Yx[W›@C +-gT F?]i5c{īՅq᫓p;m>N{_'1z ;>^>Un%Oʁ 85sRuP8buD(F?7EY:L|}T=*bG,*ӆ "a::RLݑpnǗGQ벽VÀ`YlB)[ oCRr%Kr%ȝAa2Wcv9?LMl`>ԻSeqaj!R_۳Z /W/#$TՑ۪4y-j=R-6 -`x=fW0*0֧tO[kgEKsU}bbAP,97&q<:{׌-Wc9RL: `C%I{[b!h*)dZ˧iF<hsV ZjɾV*UUڼ|;TuJH>Efw'UlBT*'T64 7x7G/_|e|?߿yuSL鋯_~ṼKzIP&@w/~G:W^4Ul)viWrOBZ[O J?.x>ߌ*.;#\Jz =d4?o"5µUT}{*Dv!MR%uR#ٛz&1Ľt$W˔rX$)D9A0 6nRk|$LP@ "QKjQ :ҧlD^YϞ; 14F{CSMS$d&&VQRƌƁZ0cĜFp[қId&{q_@~(ֿ&]{fNh(y.J~vV+ qFP\-#evޙ~1531Fe)/#d`Kg)VQvsqΣs鑃˥G(= 3<"& ڥtQh1W) 1JRK#!cҼeq"SYwWU0 Y\noqI>]EuWZ,i%[;~CR,R4Ɔ if8z=v9f%[BVVɒZ@Sn89xgryLw}7|"^:}7d6h.']:Y zk0 f«τW awI>_NdUvp8ϴs[o[RW@g. ,2bSu,MɫA @ yEw<:SN z R癌n*\?%s{;4fh8s8P.ys]c8CUޟzV㩰ǻqSO?|8j- (8*6ዞ2P־ETTY.ns*g`90=$MGI"s8t~Mڙ9[efܵˆ~?~븃_G]_xQ|󓋙}-_pݏ?g~b9"/K{s~{ +RO/ZM\oxA bqzy?ګytwvL' V5/ 91Ӣ-} Q(G eB8P؃j9Y!e\v6O KXK!AC`! e0[U[јP5"b_5Dd 2#gDeOc iWV~f_t0̲_= 3#T d2*bP[uu#ӠLgfJUT!jA0GI;[8d5e#~&B]9 q@j-/]tOrM3n1bYe)Aʛ6Kin\mjCՇ]Mr[Ͽ89,lډ/X:%D62Y;TB'Lr/X #cQ]. 
ޙu={G@T- l^4f}b9ZCcXTXb,mlM&{N1x$<{cj&n<.@;0d`T!Qg=[ /Y1g 4*M/0[ F H\5qv_U6U$qe6|[mύD..u6zq0jr1F1WLjW#qF7⪑ݾ+RaQ$^AAW\E\5jю]\5*c WVe+!X7⪑܅y֎lT:7(`/338on(wK> z5n {kg}ٿoO?.eYcy9ϸNNǞV| [VןrIL QŽӍ\"Ə]L7*$_vƇ~ *y~2gt~Ѷ >lo\O꧈+DyQg؄s.[g;K]r]#~Yt}\_u&,GSG󙳽5Mqvw?륦>FB}e߿9 *V)+HN)#9ʵF/'tP) '.iZ$ܼ%b65O'ϧgW0 pr_4^onu&0?{@r+ۣoi ]dYl/6(([?W*ِPlcU+ `"J=o(Տ^ͩ-RV켟amAkk9xwtv&*2y孢T9]mw$?v pnS{Ӿ|T۷g@\Pۄ_~mR>E[O,Vx^z=O_r<@W k"KE$)!4Ux ¸{l5p^[؎e^$Sc%h"((!lT 5R["NTԋ+EE]Ή?paZ_;욍:" _s'lDbPȥgut`٦Mg(; DCZsrpՄ/Qk#i*2!ŚN3`` Sլh`9x%-ې~kJ--n!Ws֭)J;슩B;BXAG,{7֝f1Gw,[*;d:ReG-][ʎڊ[nDP}gݞZk&XH[х"fDV+  JAi"<(T>eAjEY8kl6LƸYPm rTl x%MGre+ p(9eg 鎏73 o7Db|N gإh}BȢjt^順s $6*d W)TUj!32bv9~`2H昘eL̢Șj*)ͅhVb,DdiFĚ T[TmTYv,bz<ptr-%RNMг5œ\wP 'S8;Jz5[aaVjnk;@6Πk[$D2jj4Vgб53oA9з=391un⦠x כ3ckW4Ζ戝SNm%}UŮRz[Z [m$kr{18"ph1GbP2KfOmbMWb;-*BPD鐜M|0ʿWf2;6G=WbZ*ՒlVlB @S!Cײz9?LFWe74Q 0*aTJJ-%&Y6jZAx&;iO"["sH>dժ[ U (%?-5S EIp "8.ʇgc f 'zo%_:_JK`R BֱuELVeHF{Qvcj&myDkC]E_"2y+G5I}2l@ZDJJ=ّB{ W`˧gm %:A?^4ŚU::Y}#/\!=NEDK@Y$$L&ϓlwSv=ojRRwʨ —.:K'SJ MI1#JI1SJʚvx!#Vͮr0X.PXDPIg[jkRṵTb-hU0ϑhBL!*βo6 n'09|Bt`<ִ}k]6X,%L,&.غhe'Q˥0W9yL2Nbb0ld*uJ|*Z{ 譊hRQX2b6˗v"tB]XXtLYcu( ! +nD5Bauݽ 4.*櫱V^t(Xk]imZjFD]tRtf!vTUl{ |w8&ws?@Cܟ^f>pg4*NF^ά=VW+vM.Ju:T;+l P終 ?xꍾ":M]Va'/} )g.1{[μ#(Gc*ٚs-`/RєԇgfZfPނ#` xvp@%)>ȹx 1u 6r3ݰkbLǕpn}/~Խ/gtOT"L: Ws X**4h)E('w_ ?M/YS|=I Ԩ|Cf%1wLc웫OGgX֥wĽIPpds ΒJ]YbX5ᩬVMx2FA"*R䬡U( }˧d3c@Hff(HUu.bVjePY'1WH)?>(SFs?0R }s>ҝ]<+P=!/d'5=1d3ACM>l2t=/tesy, ]lǓ^ϹUa{-BO=[ѯzl hK)X^)rM.b#:{mPKzbgƄq; tmzݝu:w[\yz_y˳;dUkusU%-"4Q\fA% kLj2IH׿r /G0X*"HWdȁݎ.dUbA[<Է`TWC$x8II̚@m. 
ANi[}^vwf\C_4Qѽ-1O7g4iU={3yҭ.-0%m”Qh" 1ki6 gkN4O2׍$?]R9|2LJ \8X*0, }HOn(j(nUUduZui2Sh^{x4Xx]t].0J?h4(M$EˆwE_0.̤0b 8Sn5ʪ۩zaOb*b@6oϳ28;7㼪2>D3]v:L*3G>nƌ[u`+3P"GW!|84Mͪ~ǒ?->?qtNTT}Ҕ !uLX72Ih+t{3v[ÕV_uf@oeA`Sub b-k!Z?`aV [ U \[464՗:(s[ "WydzNj-͂d4+qevG s'ZsI/'s&`+R7X+4{U5h#aY0طy?+C/S~GPd,l҈TkehmVokD7m Iل?|f&)ոIjE ɒZ߀闋:}2~Fѹ-YG$#.xpEK,sU^r-2)XiځC.m%ջ Wņ:^O?ֆ[u-NA1lj> 1)۲bU*.\-iٵUܛwQ_MDtߤt~„8\́#ˣ= q)͍1NHC LD'VFx :~3rC)v!AWxv;o0FGHZc8QψS(VN;)i㰡D%tS; 4mFڅ;v}qQJA!)%dkx`oN0}WЎYpz|kEFj:ѢAS`w1Gx,X{%ҽdٷ6Jb=~m 3`k U cJ#68R$#Y@S~TkXe[ ]?.hrsrߟb?t4T\a1riL(>ds(9kAͅm꫹pt9lw,g-KE1KN޸s ㎍6cc 5ۋz$KK1ͻG^m9rXQk).@֣HQI;[v5,#u3F2dm1 (œDq(h(Ai' ݀Ђ]лtoic|I0DKɜ>;ܫ sLo[Wр{QLL mQ޹y>G1S%w+{~فՉN9ыuk{6"(j|!Viuҝ:s#Z:%b"%JЏjDFW"A!D "6dG`B<U` 1kAIIDk1PSvYqޥ Euo?cu8#2D1KtAXCQ!K1H \ Q(cg! b <* 9N|0a!rxj{+%RcA팜ՂNqSpbWrlr"w$d8_]mm_%''9,ʕS6xI۩V^uluFhJ|JcNs՘hŭcBJ #HwڊG 7FI\PNq飶ěU`̙֌Z3vgt ; u!EBׅ[74&˸%fz4ګ-J7_`t9藓ƶQ㌴i + n2{iD6 vdQG]d$F-=P42AZBIRTH'K!#a:0- 9>芜ێ~wlqZol/ Pq"(lxʵ,M:O2Ĕ@NpP `7u5g i4ёp A{x4P(SkC2XG!1̍gӄ ,iPC,1s:ֱF쌜,T[ЋeS{]qɦzu^/2'ye9w2qأEԀהBkK16Tm]чqǮC>܂ [pT65rnq :Va9q }#E?b8E?P|'i0OxHqdXA*/_=` ~3y6ELabF-ۥo0*9GIBOJ[ KIβ9pY<(t<>^?Ozk薃DuDFgO*>ԁwww. jcDKw:RR$8".JV(9ec$;|{{u&W^fWT]WÜȥU)u (Tyd?TWc{=Ѽrl]6jgyr~FKkޮ~ _+x֕p [_uYŚM+ ~7 [LL(;so%>V"}F\UKBRG{`'0RZ)^M?c7o~,׻nDKJWsF6㩖= av,cpYE>sO<8J+%Zx,Xx»u+¥bBL1jveZrf8z|'蝑k=Ӧ;]#9tzdhl^g~S7ZL:%#hk5),:*׉ZʣŠ# )T'4t ^ 'xM֮9X^:=U<ጏd4D*#R"%1dNF03,* B+ama@iBfK#0ĕ&J}w"|}%֌5Cƅo9n:CÏpu+䢼c?a05sT]qffC(}[ϱ: *e2t|T=2V. 
D0 tp*S (0н]S  8RF¥CRr%3r[ߝ}iί9-x6)mݛS0'q!\&+\"O۳淺'z#$TN[4kH`Uf]?Ni ŵ>եxZ_կ՛W~b08snzV-])Ir7A>c$ikKy[3ZYdw!}~LVpp t9oޟ,FUtն ;oL0a4viQ,8U7ϭJA/TO?^qPG'/_|e~w'߽8y98%L¯"p_|uWMC{MT5^N.{K|v3w%7 ?ҥ˿WU|Zf=#X |1?be*2U?ܠJbݹ*DtQH}^">Qw/?nMG"[}$X"H!1\iLj- >FB"HԒZmt{so64*翻М86&Ah74Ď0 HAea¼5OI(2;N'g:;z3.$ޅDtsMOvv>P:7:v0a;֒6 =l^׵q~:]HcR=٠L ƙARL7ntF!G[$m=Z K>s4[V ̰)ǵf8HmI (q0ݦZ&G>S!ޅ$ @/-a2W~my]Gglyuq[Gd ߚuD98ӣ?{WOI垵ڪ}&JU^r_nVK-X`089Huq(bT@%UKqPpW<>4{mpK6.3=*/ZrU;LLli޷m{}Ckҭ]?5bӭs/|mo~@z-te}|}9=`jvUt)'Fl8Ϙٵ43jifA2㪲>sk^}L5T[{^Eɻ唊Tt J_.JYcJEtщ1k,W8ؒBO|tЈMM%K vyN=Λeyl~q:kI&ZH٣u秹}xjThvCp,0_^&.3w6]ؿ)M L()#.D&$o<ÃCrg# Yl}=Ew42zkt@Tp'_W?swY8qeg#6 4f)-b9O;4Ԏb8ؠ]lgy`FS'TrNC.lt%CP[m]c`! e0[U[јP5bb_5Dd cٰR,QNN:a>Qar~'_{g_|02_=2#T d2*bP[uu#戩`نRkEZЇ'd;qޒvp6PjY_e!սenڱ)kxGb_ְ+38gY-w}!U.urk_RE'7E/DZsO֤jtr{U'OY8>l: P-X&%1WmfKevXܽ/YO/Jwb"Uo,5,ۆh[s4Ut`> 1%c=P #xoFHI5TSagMb&~Ky3hMb0qWqyKEЩ=FŽ4qao".MZs4)-˫ ^O+z ~ascjǤ]Iu{VscRZ?/se\azm}\5-썹j>+z}jz3wsդ 04W`a7{cvoU֫&e\Esqqxt٤l1tdۤr'yQˎY'e?Ӄ |(ӟ!<۾d}e ~k[OZp50m]S2 JQ!zj\ǿQ͓Spb8@q5ț?حћ1{b7qI;Ipߠm 33WM\5ql`F~Iƛ4W(Np%o*G9˹Ӳd-22_L`ٙ/J,oHg;n ~Gk'.+47M`?`_tt2aߠvJɇ6차\r yrp!V.u]/g4q.0o'KSr,9 FkvTOrsVyK.Wxw˻j3e [B<Uu$A4>3J)$X8ApI+GDѨD],hqhQϓjYNtR, ve{ź9OLMnnylOz<`Wʃ]y+v7P/i^ UPa `Wʃ]y+v<ؕ`Wʃ]y+ߵZtSttЕAW:J]+tt?\Ɲe4'gtqrM佔l~D3YRiJ`RU'YÍyJRܑmnͨ'?Y]%Je96q';Pf"b }]^vB׀qNd0TXUR!jzk]us|AA; MQ]+4H4l^H>t.3a0kGsyßo5uC`"]- t"8@TLt%#o`P kģɺDSDIRubiDs.7`n7RGS-B4Xcdr2@ ,VHSsZG+;'ӳ/K!Nr# 8oxyF7ߟ1 \ zf_qot`Mg(; Q©| y2V(zy ;_:7g.%\RiQ'[@kKYͯ(wo);ok+voy^u} Skb [ڤI{K9TČ"T֪`%ZCQ)(\ĚHS6A$VllgM˜mԆW< AQWmgM ]kk:#临+[y+MCNXe5:X<j"F#jbMSFJqy(.EVui(3)c LJn (*DM;l=[#xp21g]fZj$.Hr8=Sm 6>g |}d>[O=kv?1=\3,Jό "ŝBo4Sh'[ O#N_F iov+[%_*Kk` cq@&7rv{$(, $!K,VCp{$Bʬ.=5|$~ZTm5C3Cr6Yh+_m7q^͆]!Y_HIYqezp 0-5|mQ0GQ]FSD )âk6`}&"[qvn 48ƳYrl3iXDPKH׊.k:0`Ƞl|T%JDn89$-9CuhV,%Etֳnlg U%+jgH*E%m(J,ӟU (^&لeMCz2z;?LF.o=DRVE4`W)eX2RnlWf|pݭnBb߹E JYP$:aT*P:ցRt*мZZ(u"T`ui-il=FV̕c[BgZRڡc3ұQ_Dն*݋ڽpQYc[O/g~O:kN*h8%4y O%:Oi㞫E{1Z0;l%f}˙wPUUU5ZT_ )E5(v{MOq[@vIP?Dc+,rHW8h(o]נ,fာ)X=ft |y 7by0hwOcPE&l<@yBOX`FwiݝX'u1pV.:w r 
6EŘ`mrn P)U PR8YmgJI-Q UʺEx|a|ّz;ˆ=?zzrÖN݀ӯ7\ΓQ[nkz_S .G.\7k+UB?{WܶU }qUq퇬K5Ę"zCHZ $Kpf0uLkVP!&)BNWL]L:^wVh]BSi{XQE!kWZвlun-~7wfoJ 4u7tj{r&U4 Bq! O~>d35V-:xv[#␔MzR۲l?LRO 6z8|wxdшX$`"DkpR vDFcNY a||۹ӭ|x|V)'x8%W('aRAULkWj:fX WqjR'I]d\ʵ{ }f6m\kh4D`q_jorbju2o2-z}&)ktYa1 Ay1\NnMIsMMI_W1r5[ *h^LLɯ.'t 1W < ؙF/Fm)%ɌHO?e\j)4j/TjO9[˳w=;g0eR4ڿǣPk;%ᝳTcᕘiqK sNW]3b//e 73M>8M+^)?O'-IƟ~c& +~_\Çf]5%۾"_VR,}]k32= kr#QU#C\?i7_q\ӷibo`?x@["[m?w&ŷl4V{Sz5 Szw)&k'ۭ3ǴIp9\h6ڨyHmEע3h㷏i%YՐIєL꡹2gţi'ʊ&cG߱uf5Et^,c.DL]K LD'VF.$  Qe:,O!J>?OLe9*"zQJVqLI3!" ty ?SW{|qT:!t >m= F|ղW1GŤ="0z̰p `Πu"!SNpmi#%GQ 5FZ痩u҃:8s:`%b"%Jy,܉h \14E *B5DeP هгcq?G_8"CDG0` yj.f1RAQ(cd8! b <* s~]`DʩR`YȞm:5MjRП8ltˉ<URU y|O*?aVTuluFhJ|JcNsiLJ1!VGVЅт* RfGm7 ,3r#clF|J6,F;b!EPXE@t4vMU7 J3j#%gHf@_qqK'!K$_SFu{ԡhd.%7`Qk % 6N0+D{CJGt`ZD"rR>5͈m#bb jg]QE{ Ĕ]q"(lxJ,M:O2d8xhX&K1yY1 4Cv4H:? D9#`\06E!Fu*3ffʸJǵZcSVSrj-P}a683MFU'0@ 挭[)8!eeJȅX>>*ld|<)ZU_s;k%yar|ϲ'[RGK/0ʥHk xM}z0#v;o\Nsvn4὿c(EDBeQ䓁=HHfH+@@ W'm:[:U/=oE \ "M\1lv̊.6voJxZnhf t]n3&% .[ΑRFjȅTKiS5 ^ 'xM n*_uPJnw64e^|:JOX.:NÜ87#o}(m;e~ VGbԼ/\`ػ^F>u6o|i0EIA)TUÛj xqlWÀ,65>E|qTct ',@RE0kJKu Mqij[ԯNNǫJD*m =@R-m8Op;[Զ.$~moM W&#'\QΛz+#h ULR|<Z9L 帵1;p [3g^%㛹;D}@΅R҉-?V mCU.Yά$+.| !KG'TbT Ydnk^m$p ĔrXr)`аCڸZK#aH E]o#}͆956<٠Q4ju`GF$ ቲ¼`H5UBpLʻd-W+Dt>hpᝇNnUx 2uoS+[uJϦ14<0Km*CA ƙARL\O)p'~R'˟P%@`9e.+fXFqryS((g`k.PHØ8P=x:FYY2,%C3{ݒ_}DхےZSbI֝:[Z'u]25s1x{wpEڋAnؗдmk>0twLe5{^MONm6W+EM}ّԤrb*&ɫqx ϫ #nyz<}]~|姶WSG[ݺefFOo?0w-nD̢W_GF#:ff9aHr<@Џ`t9 bH ׆cjS/W'ƽ2]= 5KWOCui(]Jz[rjhj GCWmCRBWotDtw6]  ]9trFBWo;' kXjBWoث]W24k:x] ] QztN}Kg](we'?mlڸ= j;anxeD8yɹ~5k ; &o?%Xd M,B@u7H> #@|t5PBWoB hjp,t5g7PG7u.kZw8JZ6pE17`Khj1<] ~ǡ+ȩL$8W'eJt4ԕ];6JsDt5?fih:] ztGDWpУWZst5PZ^-ҕ x}zmлU:۳ >&mwAnmzq~qu}Zn[l։x\x}}^3N.\~i;zd,pgq5:k]]tyK3v]x˟Ė5f2uAx]b$/޽A~} |{7&j8_;w~V_fK4L&NEnu-Y~2*W־vjۚ9u-MT!ǥrzV6{zKUW^l4,&6L){͚5ÔLtŦHp_Vih[2bE""}p{DGf #Jkv]l\11fhoBj6O: C/ ti3k96)qql}u}q{~t=4.Yc3ڢ]>rzi#>9Pq_腾6 -C7c.j7G@_j{53BfzhG_ɪ>54Fu"Cy|1,0붑S}^yh|A׹/30}Mv_?F֖Kh؛j[]>+{2<{>&MQĻ~Ӯ쉋/ć 7M]c~rw=br=`&!Wg XQgV(M׳>?FjChS!2qZsuV=Sy`ll+pbyawVj 
%M![V 91M居Es .*fr- HQn2nWzNJI%DJ@.ۃ Zd[4;cFOk xa!\Et͹c0+1c̥aR=^4d8R ߙՠQ"/ЈQ~&iO?g/SJ{65#3XRssf- сOB(R^ϛXUhUSI 3 $[1ۦFӽu99cK :־8qN>,jct"vM8ŦRRLXHHI?aʧ Ť) k). \jH) AT}Ɉ-8f"i6'$X'-͚ BM )ڂ1dg̈́=u U5X)3Aqcsa36NJ%Cez+UHdl.!L! #ͫiR=೭ [CdܠaS`d 1s%  Ed3HgLy>]o$$XYuavLq b`.Lơ N u& 4X[y(y3XGf"C@?J ޛk,,@Q"Ł.̑& :l[Hl ΎBr! d|F*I"D5{VQR@}JirYCBb|9s,R-Q8P eyE d÷4r/p&ZgM2 \t;w:+A ǘ@QSrvTUX* {0q8`oǫvi:Iė7^.ZP/&!Zhǀ1<*4&(TY]I9H|1*2:D?J4X 0_<+XX茸aZ R|$ L&rZed^W`q̎[CL`9`q&! _v 5DnG-C@8o.̪d!T?3 Du' m0|Y+ 1H!r?8;׻3LeELs2XvobM=!%DC!ek /f90HT&]9vL'p뙺tH Hq1 jYB2mѐ-Chg*ɘ+y y@BH 5CN!ڢʬQ`roSG3 <3 7e#kq3Sl:LMlM1FiE$C VAWZup.*b̶d`368lA ؉ZȽ4kYGنI&529 j3+ioft譚  RoV !Fp_(Y+`Hj2R@Kyl<|runl{7iN/-L`0M8Jނ=*MF`ۿONdAŬ㪥aXSqYkj]/ f@הS#"{*]T7nM^-pg¨ui1` EYXƻ; Zf6'=per㹐ᇽ߇U9}o: ksHu0_mco>bVRl7ح.Jx rm.lg?^\^һbOn2nx8Yᄈ.3cO>^|5W㝮Ϯvy?w9Y"FOٺ _µ*>׸f׶@I#4~*k׎6 ~ޮV} 4M%в|%H@@o R T-@K%R T-@K%R T-@K%R T-@K%R T-@K%R T-@K%R T-@K%R T-@K%R T-@K%R T-@K%R T-@K%R TJ ,h*xӃJ'K%[JHJhZ*JhZ*JhZ*Jg{ܶ4U{[@{vm&((xS3i/LŚqbtZ4h$Yz(?R|Hr@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r dpV]NN # r5 jTzIN!: DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@1hrr`ў@ יl@84z“hN /T"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r zZoɏ3jnק̈́OO@r=~T/g'@d\8'Nr@Q7.yl1ɸ4ҍWHkZIy3@jvt-cMQ.# VkҴ$B0鱖c&Gez֗BMoɱV8s^)Q`z qfVisƘ(W\pjUJ WĕŹ<#\gprWVW5Bp|:(WglVFr}p5@\y3 ,\\x.BcWRr+gSƜ$c㪛\ud\uSqpMR,YϹ`g+3U"\Z+Tٷ gT&-DFlpr4Jp͵;@p5@\Ih\NAlW q a#dq%c\+߃]sl6B^+P٧]<WXqh=G܍kq9w9%ͿsPLg(L7+\0j;Qb}NQx6B+Tlq*%c mNBEJ Pm?1J1H\9ɴ |:(e3 7+/qp(U&\ZzEt3)^O$X>ÌqMU7{2]Rlu#\=6Vpg PT:C Rp JsU vR%\ W^dL0cMaoM3(Y<^1 *\zqդZmUKEu(r y}Z- =E Y!lXٱVBƚŪ*Vi}G1*O6M@b]5 KJi_wfk@ >dRm`aV!ᚯG_&͚۪Y/VO/_X׬R >ĺP 5mS>"ga.;**y*rp<s^KBzZvZKyrYI9x U1_E-p%JrS8+ʲM3tQT[JA0V{ Y-w\|>`Z+T9jr_*#Rp8/w:PO֒At|gWiڪcTCߙi2#\`#ʵ<\Z̨kqe5n@h1ʕLA4!}Ncv ؈|P܋<WJO \D6 u2\ZO Rp)ۃ`'N0M=Nj=;J޳:}\ycz0`|6BZ+T%j3!3 >DNr9&BRWRiq%GW(cW: @d] WsBVf+\.˾ U K"oԳnX`qe=slӭ61;m=|b@21;Tm1* az6[2v<\\ P}tp5@\Y)B}b@^+P+xq*!\ WY#rCf+km.B8@ea^1f32/(W\pjp'/W2oc(\ulYњLf3|,$S8uHfFXA&L\-r4Fisw8\`-5*\Z+TIQ q:p8O'Y+T{$G+e{&Wq]>B>O#VA WȜƮ@etQ 
*w\JC\=2n&X#;g;GU7GZFJۯe8+Nzls`t{e:BJ++Xk(xepV17T];^FJSXևޜl'㴐囓_~M~Lh ?L}7nU3_}ǿ~xC蛜frgK͹'nZ_6KjIBNNnɦ2,)>%Ieĝ%_]qj8#.GMi9PQ mO5Wy2T a̱^/& '+,K7d[޿+gT9ewp/qHw˯NGhմNjvuݨlkvrf]s|5zz~Pgk(˜tE.GڴʞmEkMtFBlprWV~( UjO ]36Re+T.BFW*qn3߶ZlXʱ֧z{(wyO:A|@iTRT9DLɅ ?CPB+P+;P \ WVJ/rM>cv(ך\pj=;4}mr|B* 3*#\ W^:SqrG& u J \=^2%<`w`7htShp%Wz:#\`5L+T{-;!J(W^9(Pv l+TLq*$\=be `=Higlc9hӵNyqNEVښ+ǭ:el?Im>H3m6[X_ %Xw0g5dW'wyp[P?Iab>ENju? F{EXU9͛f$6?Jբ(S]"*%#+ϯw92a9>NޞCr[Lo>aqv簿;<6%XV: ' S<"!y]=r]Iգ &~)k77J*;6bRggE&t6G]G+]myRq[2R83Q%fe w"'z>(4Ƨ"yuQ\SV@<BqjbVZQ Zn֗*rVǔc,SF}XL+Ge~XkZe^)|:>b[+(C5VmΰmQVk׏o~z9A[~}`󣿭)JZzfvY{ }n6dRAJ#d:N_RKٞpѪ}^ݭnn~_]aK%,SFB•emK^k铬x.$g\t JR%0SVR;!%.RY5%K+U\)N&&P@ ϳ\W&rGB5?5EhQYu`Ĺ u pF1蘿Ұ+(( 5Mxxbл"43UbA[WVաܦRNXs5beAmCH1:rJ@RzTTbɂdFyps4t&뤼&$+xB%M)TZkEuGu WJF~ꋮk|z)7”S~ʯj{rqo68麓{>۽:d7j x 7z=Z8 h^UERUNqx<5<bl5Y}8 Ze»lo1}7:_.I.)[% ΖxtپTIxf~>4\/#?m@?*L'&ը=&~֋0WcNJ@75Sn=Z\-~<``c Bâ"3aYիiz]Ĵ8utpgh9´Ý{韞6n=i()?m-Fʹy'D4Nj\TuUI$)ʪ2%fAU4UBHyď^ }VB\i) 1ڒk VuP_mFB#>@PC+ \G@7oz13"Vu]*B2W3 >\G.8Qşw.\V{O U AK/"w>F%ȇ02KZisQ(Qh9R*]TKʘB-#(T\x#)8ٻ8$W?,̼~9]`vl.kRT7ݤ,ϟj؇ZIHI;U}q0 E8G,r3u WzoaEޛM֯7cl}Iw{_w.*);2Qp{+TeMjȖ*%+< Rb*mC%QHmfJK"J*R6; lT% hJ|/xXˮϞ}}AC7#;Nѓ=bb,T aOLk}*ezXZ.LWXG[U?Hj؋YQ3B7DS]ЊNIi )չ|=dxNHu:ykL3^ĐCQS;Cf} )l Lm7o22fas9U#FhH.p )Gתҋ[6 s%IXЇQ3p."eb @2˯Xh]!FBz⡽ RH\Մ}i蘩P(I]RXXB3IB atH~i1k m1[:tcjU-֔{s̲#̲K+)o(XSU2:לBιAg)*lBnq{6R ʌj\bb1؍II:Ԥ8WcC eVX$ 4"7bbrGi]?m#(A6A&h0۳u2dɢ2. SrY,? /jgk +̈hSCD] 0 6HQ˺]!A`,B"WbʫQ‰Qh*6J DZ@ԍ8GYoW?3ujcQ%HkBID*sn(<@ i0) %!d-Mਏӏ2h^ '25*Ba 'B'lRBi iA-& jē&DNg@V )`AX% )2qX3y0y'[GDZX>F.̿23`{ |6Ƣ$}HFdtamLD툰lWKj-4~ѫ_39LcƭaE's81ۘSœʵqUịy܀./.ߍ:iȩa[7AVuapqf'!]? ' HM)$8c`RRjz N u3ISWc; Th*y4<m#LFE$5F3Ę>Q,9HIdQ0L=(Ԁѩ(f졻<3(RZ8\_j/?/.j֛ b>4|-cWtnPӤ4h N\A!2#YjyPrκ JR,!X$kĒJYE IŮ&ĤC)_MV1xbEpa7gv[MF >ct;7E?>R8%B0G|'RA18%L5cMɥ \T#t1 7iI1 -ū29s|i5i/ ]9+vSSBmȿP]BBV߲'vTu ek'n;|-GU.3l /R=j/K SJsg-ԽygoggW) ǭ6#ivm]oIh}mF L8U < %\2;!6Pl3JgP2$$(}F"Ut "d@E;9rgb woEiL['FoҎ݄ar_2;gs>t 7CqȞ[=>S޳H#"! 
^!rYޏ{c}ZcG P!> fvtXmB6K-x5a-6qg܍.ZkI /`Yg.i@f $0¤M]?ݍ2{z/mL6f 7 jl^7TaX |Q{ -6zsL0P0lG(SXY#`DS=`%AJ!jsڞ-/].8%-Ogߧ>h8k h{f}5[M \l;.OgXntUUM}9[%bxŲduկgt]_bGlMޭy槳՞j/tm6y}ݖ?:(5\a]9 /Pof]cp 녍/}}xq㿾|1DO]_}ps.޾x?o՛ [uq'6h!^q1PǧW= -Jag:Tօo>}8h?ǝMKXbqgRF%rT<#.}ÿ2b֫/PqrσY1{" nh G7iy;*rv=xy0En";Q9GzLcLa%[mdhu_h1؜c3<|s4!DCdlFT-0WE0lƕZ+Q%_,*J@oLij\B]sPG_w)xR7b`g`LQʒ!62Yd'U Uﴍ2Z38+IWp D@jnIS#eH`\l_ k`ѩbya. $LS1LT3Ҹ.:uDd]ޱeuWiDQ*[L̽(K&IGH#EIFX0 sIE/%4 'lb"tDx7A͟n#fY7*Oݏ{ uOHet[jėa܀>Ff^6Ț!ׁ͆D!}꿯 ?b⯳/?WK^vc7F%(((*ʢDE,wP]zQ/Kp*jpi0pXp%)XGN$F^4˫[57]g^~]/ٝԱ;j~wnnǓ)~V h} IčxFczu1[b(%[א0Nǰ"dQ?4ޔ_emXSBDJι+EeUiJBRL݇{vՆGBJS{p -zNl'mMik1-#uUōႱFB+Z0wEL V:Cd3nn!\?-y;YsݞzIqSr(| GUlyy~~B\9 m^l{pO6(;`N.\ʂޛ_*tB$/-OGF c0e"[:ldu2oVW;UA֪"hIGϥJ@1ȝ "D)Q)FX5ZZߎ@ZHs5Ѳ2LfL_.x7kh ߋz{,.mdtSR/}8iԷla r(o~;yU\U:8޽CKx!aCnV+mvp*deI"Q^VԆ;NM%{!v7sw}=v,,gu>I钲.yS+S2Qx$PLiEbבTQLo@S 9WR :ү#-Ogw$CHi+"Ho*]4AoYQ,z\$&9|Gy$8f0+j8V'W?o[6TBchOxx){]DlEX̰xɏqECjS2 M hnGaAIb@r6 +KJ4F T+)j2`rx1綜ݛmQs݈UFjW.n#acZ1:01b0HC]#_&^}f"56Qsz7g 'M0HsQ}lڶ޵m8R/ǿԎ$'0{ǯ!J|eIB>\ŪbbYYLs_|Y "ܕ98ў#NAs>Tr\26>-CM%48dbiht:Vr&|Ušų5{:g[<{]wbWsoYG,6Zl 1gcq ny7eY[Bq9.T*b"a"X2ǤXeD*^g}eBa&|8w|~g0<$*̞T!%{J<ۑR %𢼹:ok;W&Y)8 R A`d|9xOF{+COX|YKi/RIպHu}vX:Ʊg꼍mym8Z(|$j9s8#;x693P6Ӎm `q<ğ\p$9/RhF5MrDkR6&ɠ2}z]R6|#,iX=N.?ڂOPW`2f5FR 2] .Fn`vs1(൮ILH\00 SkanLɊQ)bJʳ'qkp kv`a&}gGCLiƟVd*T5haGcTuZu]Oou׽ fRkMJUkt.)U$)U%U}JLRfDWCWJZBhY QrҕVs]!`C+ki.th|zDIuOWHWF r0j.FDue3DOWCWV).TFtȆt(*kOW_(+;]mv0=JW6Cxo@Wv=ՠ k+t.th:]!J!e7ڷ+\ZNW\ uBC+2#BBgCWWf#]!ZͺNW҈\ی &+U.themWRtutewJy,KX>]2,48ixtBc% Cɇ#w'˛Z:ZG͐cvjõ{FhC3il^-F*-DkdiQ6mz>VFӎ(686Bk):]!Jz:@ҖH͆fh(E"]X-t>Wl ;p JznJb@|}fpj3vO;cb=]=)D S+d.th:]!ʆá+F6#bD] -BWɮ䢧C++hft^{lۋI5"bq@CMc38 ~..qe"<)iF:!Җ:=ڦb j7['5xM x#KL #xN=<|]]O Q39DhqG2EadEHE*'t21ɜh͹ :i|4ޔ_e~HʬʀTh}T|T\+|έB{T ﲾ;`5)eѳ)m"%ܕU&V)x&ުgQ}gWfw:"pbx!Մb#][W9^>@[PjTNt3\]!Z*NW+TcKGdIiIIA3遵@Z UG֯eC\oKXP M#\riD//P Ҵj])+˳gFvj[!ҕ) K ]!\15W=] ]Ƥɋ3R|A˳+D+;OWuRrҕeJ]!`tpfѲ"J~?t%z9 L = ߬PڎIWr=]=)3Jˌ kj+-t(=]]1N`j }QVthej =] ]qεΉ6BBWVu=] ] 
("B2ejuBtuteArZ6drH@i$[_ޒӁqdC\hmvHJCLFt~Ft#/-et(yo;D҂3#2+Y>6;˳Q\=] ]f4e<B\w=] ]Ya,=|AkQخl{ʝԚ]DR%OWd߹7oSfh%]mkҕڀTOWz*6#BVdCW ]!ZyBLtutŤ 5*t%=]"]YAvi4+v:T]c')f>pC5.TyJ+tFU4iysc;j>#_/9' 7?iIjfִ+:l*e4JFq*5)rр ϦYetsnߺg+$N_˫E#Lygfb_ 9Hr^K:z7%y8~ucuSn'*_[ٻ8n,W>,~Rbfb1&OYᒼ\5"{>ݭVdl+ XjWU8e'CtS{lR{]M77oW>Y_Hԃk YZ "lVs}oJ-i6o}׋y?/ξu7y/ʸ~߯~q%{=u~];Yn0| }{lNLdO$Փ :]A:ڏlu#wd<ծ cl %5]}RZálיd*T'0KTn 9;t֡,~x)Cy+5(-OpRTKVEEM%q )ýU"D̊[%R+^k8])lu{Qή@lV9]b@4SֺImE\, Q6FU"FLĔ֘ ^RMl+󰭌@Ky>b ml 5•rȢugɐ5T-l/l= Ņƅ!T< =_h,y(rԸK6LL5\RY"X{cnH櫎ЀOb }Ë='YU*VimɗꚸjI.ѩj+\lȦIN7bcRײx}նaP׻gt2tW)-<=ٖ}y tKro`sw^xЇֻG~{7'DEAuG^ /l?rXH=9ڦ /O֍|ut/ *@^mrSk :].pW ?$Eg{=?ZV^l_pOXU:ٿtLFKhv[Y] -\bud84eѿ A`:}OOBӁzIu<oKYיC,,f?8]ٽ Gf9|tVqy˧TQn١Z^y״ _<@)"N*VJc[kr)Ɋ&M V }E#t1Zs ;bY{'\IyX'.:zFi.zpaQu9jYJYjy|w)z O.G0_<%4?wgFDZwvْI>&Or*i*W9:o.z岩Wbnz*!Nb[5.ȶ5kr|Xy߼8i ]{ Bh'77kr`۞LrgtVSgWR znpqΰDN'>#Zw->tp|>_XT) "o;wSڐ s_"C"ARtZ 4kY\>Z'?ZֲƉֲ~B KH&&LՔVyQEKP̆S4ä8m 3YM 7[!3iagɊ41BLG.(Xd.bۯ "bv^7vc[pw^W3{ .Q%ɆdLb5EB6:V)V\<>V'˳umڥI)s )fI/eѹQ1Ԅu,èZQizI=I?VgmsD|*Q!RɴPJN/GOۃ^oN}sM_v<efK*)(a^* ZU䂐 bUK>(մ)]tI9UB@iFJ,)BuVDŗX3@&C nbk[)UnL36J % 3q6Y$k2cG9Rp; Ygif+J ?VL*&l55Y# t8d+Y.ϧ 7 |s#N4fh/dےo>􄄯d-gg^><{oN A^alg2C.dxD67Z빾C:4=OuyukZxֿ\q >b?`ݜuvXnn/M/k7m+{1Zǥ~o (k3[fT69kHrL鮱zplT-Q]̏*rO;e!P. #>YBH"nUEG%^轆SPRXWj6j!ɵ3 E"<}ot)I@-O)ǃžtvҸXl)I3 *r2nlhe9E*pV&Yt3!QAY4(xU]j q;ÈFqD̾Ch옰]HrJa]̈- OynE%lcPZS氡R Vy㒲qR5 Q.KIa9ޕd.V+Ca[MJzе YZXL\s!'AP`B!\/V4DZ[Ϊ6TM*cmޕA9RѪ;nlJ)&S O[|iZejcx+̙DlPP@$6“N)?"`P.WBY74 > U[c.F0V[$;-*`WClTz")"~x%(J u4z 1S ;ŁI Q'. @gޑ!2X֣9mkL65_KdE*{J4ݙR@qp0se}qP&Y"w"<7/:?U Bkv2BqF]ZvOJ "F_K܅,,/ zшD8Dt e͔2`5!A2@"ۣB6Ck?VH$RJ@´lo51 haI :h'Ō|Xlز`f<#骅 7&SFgTvm>P;5,u<5H% 1< 7`:oC{p߼ik>s{8QBP7B3jP<@BK6\CmeSU+D=,R/P#P)WMw!ka2+d> ++U.Oa%,7m1dHjաB mKɆi|(%URȨdj5 $ʓ 8dc&e̓rVβ5qaUY-ܒh8eyuy,vth>!ė+_,CMQF]5MJtEŒcCv`E@;" ռ^XiFϫ-"a8`3"*08G; ƺ4p,LCku藸 z58ntnrI?cR>@*uA~89͋MtWFxwӳynPvi;eK]@95?$3!gN)e:68Xw:F2"¼j[f7*|a3x//t Hx}!<_?ՀINޟOVu_ǎBɐy+I0 Fg d`!ʂ\*dJM[3'ǀ^Ȃ6%TXc϶BnǜBx\tҳVq6ͷRcʹ^|)#5&x1U9E 9MYmRŒTJvؐ\A !WX *_ KFᒁ|"\5$Z7g?u'c<VX({DA<],. 
)UH,5$ o͊sPl,gdHGlo#?+#E7QuҒ=Q$H~q sGAkQY}Qr/` :UG~)fq,P=> itkA"(z {#tA'tA'tA'tA'tA'tA'tA'tA'tA'tA'tA'tA'tA'tA'tA'tA'tA'tA'tA'tA'tA'tA'tAVpjhJ>&X8y?Y\Mb^Vfcz&%/淙ۍ)>NFE" ^C[!r`VWEj*DԦZԯ6lLniЖ{vzGs+ո{{oIniIob v}sUNGgv91p.i#a~V=6;{W!mQ] siD Idbv}GOb}!٢9Kb6=٨/oZ, FPB#M Hla2#7}^cu+ J.5X7X2{3wäQb7_1YK CطpͮݭE/.J`09@j0jy˳A7;$χtX>H9ONݮFȝ-f` K zgŁN'7UL+N2t0n5z8=ތ;|w?~Buçx?y3xoܭl|OXѠ-rv紿!jC;~Ux"jv9<#^ g z\x@Z\/KL5?B G#Àr=n&/SHNa%Ojobtu챴߶j⛽!10Iq#j&N`cJt=E@pqʮhgh/#.jLp *2e^:mCn"DrʉFEH'oL3dեm D Ʊ84xL<+^@$8o\"<].㦦#3o|?Agv< O:V\^Z|FR GXB50t*JL͗g\Ϩ!Յ<etL&^r-Sj1Ê sl$`9gls_s<& Z Z4D(Ƣ*b ^] $ӎo6֏}owC=4h8`0OE3I/"p!K-80)L+wUf"b%/ , 6J)i.$l Ɯ\QX96rw4{O09~< 9AaedZg2kd2pY./p/P*엔,z].Y^}4q0 Kkt:[%i}=᷍okyά_a~A\Ș'y)y3K iV ux/N^WCO, a }W@ $!cT4p__|bHtZ!~ .|cǏ+ߎUCߏ6ϳAk=-GG<f]a+˄, ;{bviLэ!xO|kOo=>HHply@A Lca%[d`bJ}>W\O`ir+G!ZhЎcwl'^9!29ݽl0tl0Rr!@%'TRVFuU+J qF;/M6U)$W)踬 p5jwف[78qIYvƅG﫞1ýD >O"BDi/a "j7' ߾>bXS.o2"}C$AffNo˫Aqf[\wmm$IW}YD â7,ؙƼȌȴ˔G'HlE얍$Ŋʌ9YqΗWwrH<;[jg G;ٻsڇ/# O&s ]knpL=TnZ*MAB8kw>sl?xh~^"L,M^ҝ9riҫ<2WGjR@؊ĺc1WSSI1?RHjsJ ^)vD)hiI>R-jeCtD~k)-Y *z)#8cmMz^NjmdJTe k~yf_udxV~1B`i4$7B6$zڍtm^5a8i*F oA4[ھĦ2jb/0pf*>7,-[IE] 6PA@xra\u5m;@zʦCnR̹qz:]e*{ ߙ܇GƖT"8} nCG&M~w^߿eSl;Uۄd]!KVC̭Q=V_Ĕj7'MUlmqʵE3̶Q'3bDŶب#1ra`tkRmj%p1P+9DYE:ͤ$Q .YeLl`$3)_<05]P$H1B"l~ֺȪH2Ɛ&vjq>G;I [ȭ#cRX'܋Qydüv8mQj#IGufvAd UsXrVQ=QK•J %m ǁ]}qfu $JZ_ Eѡ2 cț<Q*\5/'5Dk:k6Pą%Dj{2"Z!&3׊a0XApbx2^v^"5Fzv12X+#r NS xf)G܀/urmݍQlK#٠1V(ғav'kAT"ϰ"|0v|_.ϑ3傷Z> #)kޅLb6#d-D&+xHu@Ky$ (}r]j kGBcQVYY}XqbQҴ!|ѬƂB7EڥU@i iBeSVyJW97zOy0|U@lֳ< /Tl:^g,X]Nu~T}g:9kH*.N.!m;ؤ0xYghoHO cp=鲼 IeI˨\C+pme{ХMMqc}P%wK }PEpx  ތ1ܼrIv\Q(p ԁ a q8Iڝ6VЖK30cm4p3hEHi.ntO 1Tי5IH%kBi\* <`P$ z('p]c,˳°waPB@tx* M=Z]!ud1]1M5 H `f\x@mJI MLGk,"1i,e!ԿYn$㋅ H{/!j aF Pc66a|< WgYz\?:X]΍Lj `!a[-Fs/ nD1Uo3MwW-bqסcX+;}Zsnp9%kі n0f 8 3ʨͺD%@&C \^A7 "K$:YUkwϨT@=@m!%L 9. 
U: 9)azJfYA>;llE]q("ђt[Ka <;@'!s@ɟu7' gMP81 :(%#iU/E:@9E=΋B a7p `NBw>EW34 dǚn$j{dMhpUfs ڷFoQ]C!yoAB@ )}cT98uF Ok(R5DLttiZ !WVυiDF9lv%q{ acS9Y}Jm쑫ڠri.j" 1c3 iMטG`xN]X~*\t>teIx%$DŽJ'Հ\k5.nⶋ;7 "$vI.-K",&qVm_V%лt&+-=zjXhwҰZx!y[M(\7pܻ\m/T@&\IGH;z뿶 ҎޒҎGiGfv|[5Cn)c~HqsP.}0K=(W-qYCiuDCH,e/m8&kO3}VVБv?uQzksx}};um miZ_Q!bq1,OMfn~^Y]4jCm8_|~Ofn;?vzЮqt}FZi@ՐoQfor`C#L,_ɛjF'="ƒؔe ,ӍgCO\jK %wAͤ!W*cݷ"$1= 4n)6*SJN~3֚k}q?2<ѥz&'µDW&*fB*3FP/f١>ŭ͇4_V[d\3#ɋ$e=@UQ{S`mH1*Fkydc$ K(UɄH&p1Mff@|\m ?fsp;c٫כvl,,VyPo7}2fJO^Wz;u7}'/sYX'V@_^+&norSW߷1r_/.BƩ=2_WK|iO{ wg_gj{8r_n^Y*|:%QXd)M|X45zzV Ǟfwa͛w?7+ο8g,,icŗ@d 1pN>H%J{Dy𕍟aMxYF7@_.F>%_`5{YU¨8icX[dX_rHB6ba.YNn>t} 2o,Ս㌎1h'~O>DgZ_R?x7 +# 6`pPENt)JuCŋ깰y/SSڹ)p-6UZUl߭GZ1t=t7qA` tv,hz\"XI)bM " hC65ő;H̉z|'rum~6N 0 ( 'hλܑwΓ'LqX,2485eѐ{ Llq\O2w❻欰z˿tTOlLLlL̅q[fbviWDgbv) 癘p&>z|x7\s.kýy]vbR0TP|,6o1'j1h1|#MiTd'.; Z"\cc-` KMٜRp_v{y}a_>]̶l]*a?uu};Tr.13̡UibղZjC7eT#@\ L'c85$%Tj["HG6ĹوQײjcֱ)P\k+PÚɻ> HAVn߫,"m-siBm}{[jSɄ uqOlCMuM=ԦRzPj |m`'wWwU'vWw7Ӻ;I ]IJ~Z*]]HE pkU7mqW*-8uwRZ]=Cwd9-rWH][!L]Y`z5,TrC=]}F9e½iIM> &WRd3AВ)k N~wYj7,~pf6|s t,xk<(Sﭽң{Cᠵ4_ϾϿvr"o?Y+2 1g9E?_6%┋O?ۜ^ǒ`h֬w`Z*%!+3\)əCQݕ q]uq]uiU쮞WBP+;YՊ~49dò֫+  ^9؝0}?~|E>b+4 T.>}vߦ-7ŅA]Z4b{1uY㮖B S.7xsG~cI# }oN'h!V{yI*?'\~{dWK5PmPuSX-cP6Rg#Ml}8WDk`MHfZM(a O~9z%ֆhj\QjyP *PayPe`%wđxJ9Ll"6P qo2Cҹ7NgQj.ƾb"cA\s$\CYתQ DC!%D&{*ZZ SlMuKœ"QG6Ḷ́i,&Y[ fRi2쳮&K[j&dc,zQ&82ѳ,W\*~g)}!c.8â\y]gcs?w'ҵzS2-  #%K(2ޮor=yF.T<[@7@!{7!Ai}Kb z.VVFPsl|5hLD ywk,qn^݇NQ萮/M%_CRVXЄֵLveTiŻ9$FH@LMm +!ag3 IBdR0[h;\2 _p@QL\f-KޒOIK ZcR Ⱥ0&d`R~2l@>PE89*F>glpZ{/՚v68]},EշQύ#cK566jΪń  i_:['XsxS]vzR\j!j-81*RXBŚj̓f%;[s>صPiSnMhqԬrM8FqT6FӉmf=ߌo%%M)@)=J2!BB`!Q,x簰jNu""*:b|}F6[Gb${!$ $B su5F=\ ۚ[.fc<8cC>8}*1:cq_Xs{)"=dX2,( Pd'J670 R) /׼>RJxgiv`8Hm2` Vo˲dNI%a))vB))vNIY3􆋱u( k%Xn*%{5dj`|fIHs9Qqj6-*[MhCpL+2Yob#3T 8u4qnƩgnvTsu`[h\!TRJl6C+."'\TRϥ,aNrs2[t,r&SugKI'٢H$U5M{ 6+/ [Ptrpt$X2pK TAa }7}H-[J-b-!It7д]MX1@Cj/FUW*.PZRJ h4؄ll. 6 cpq+xCyyÇ 2LӖڰ D+R V)MPG1iuiK'K }kˮ]PLtEon,즟-ee /dDrYb3V)W~gV2y'! 
M"Eq]ysDZ*['{^rUD|$UQX9IXƂTc\H ,Ltϯ{XFŻ좸9nͣi~ 9ji6_k;0vY4a#A2?Φ=)Oc.*AKUۉ>e Lc8c _ܟ.Bpu2ކ"%w*f.D!Cm,ooj4>WoG >@ts4JQ%wli> &AX=(4A=f\jT6au @He4[ڈxQyDtP':exI-C)1ZJr)r^PBq ְ>e&#QHYH8*zg!&ye4zkRTv`99b{C)XR}:).N QyPT8*i\"c^d(ʨ! b <* ?s~]`DʩRYHu0r6ev>͊ RvT/x* OtXF%CqlVd'Jh|aOrΒc+.OnkzS ?T*I2:#4R{C%v>pf1JEBiLJ1!VGM cEl jGm7 ,Rlg&Z2Fs*vƖe:9Ef $lfڵqO~Ӡ4xT~F3 + n2{i@D6 D2 `ʨ.26*LCR'ZK(Ib yIt;t$LхTC_u Ezm'_R1Ej;xXwwKu;ΑA gAzg9hy"0G:XFzy.s"JD2D$x+XR#C֧FJΙ%=!hOéD MR9lVІdHBb%$gh€4(8$Kn*AMKmA.e\{ejl+rA.$0,NsCgqk/"*^[:7pA.B.<lw<2D9f7-iYτY& Iso-8ZEN8h 2渍VNltX%QJb2)P*J%cA%#X!*3ߣoINvZBIZ$(`O59\x*T?C^M|nB@մO8̚T>DdKS zޟQtMh|$ӎQGE&t$"E+MU.ɫ G N(w=3HМeR79-wyXm~H랮NxMFn^wس|Mm&{5wcLnkv5]_5a64kQ ps}$KLj.f5k$/ݰ.EJ8KŤI.SV&. B:R)h~N!d:rWɅ)QB>*ZI0va&WAk$}Z%ϖq<cMh4ovQQ't{s?Sعޓ1S1̹QaE{ac$QUtk v"^nA0G  Gd 'vҡ`b7kqQ;-DZ#haRXH>ebg;RRqJ;Ŝ(Ę:` v$# 97%ɻ ,e-)Jv/UOuSv?o\ q- IT2C0B pJ6DwQg3.A􊞃Ay: SF1%b(gYK ( Ҙ+igɓq8?G>{㴑aRJصP!dYHͰۃ33I \Yc^}bz'L.M!uD(Zv.>9^B'n=~gq1hD `0F`QaT;"zDrAm\[-]޿f.ʌ8^ia\סͲ+M=\]s3K8H)]Ƀs fɵTV"9Wlf:'Y+S/Ӽ׏`GivT_vVB+JJBP^Lg .O"yp"P>)0*LX|TC3%3,pFϰ/O[w5Գv-6i"CJM(MZI{wN~S0eWU}=M{R5lN$K{&^nyJ݃3˫(jS_mY,7K3rZ^}N7)'L?ŒM]8ywriބ_4(D/*LCNJ *>ָ$FF)'whZޝ}z8yCqz}f2(>=2lٛh*"ƗhV+&xwcZs5wǴM p5ݴq=mcڨ.Tg6 kG'Yl6)csaO{a+K ~"bbŦdrY=VzS11c@Ww1x'sL2SSbeʙJ5A‹at b@Sh$)4uW02>JEL@1%b,esi'9e"m6X~z]-w{x6GPY6t:)k>~Ft~>!'}{Z49a;iUwٶI ^&Xʑv7Rꗣvc<#j:##!RZ"ɘY"k(N%hҐ΋dD*#R"%1@9N2â f!H&[ǼcNBU3&Pl+v:t!ά2Y`Lql69v96a$.RTd4Q̃'"rMT<@FiΑi+w|kUTy:U\0S9ciBVK#JxHpd d2ݹ o~[!?Y_d^  ͳo<c|END=# D~&H.mD]4^:٩\GcJ9,R +D9A0 D>&(}D%(:҇66t"}28?{ȍ" mlc`0H@L xm{#K$e{XlTTg9!:籑E B<QτE4 qXaA!f))AwFXt:ULdgmbϘ.b"Wkݱ2(J-QO 1(Dfa;JYwO?qu"uǣB 1DV.[9%02o=†,D*Gc/~.^uCd@Q/%@Ж͉6N /M2qiUt KTdh YZE*'TI`N!}AV~Z.ڰIa֝%Hi F%=>X2D# Bw:%*EH%9mSj (y"ύl"ҟ/d53/CMy o4t'ug~>!zb4zm|5OBUn~6~ln)~5l[3nm Rd'x˟yxx7z7#j&?[6WLckZx;wSM^]cΦJ`{U_fÌct. 
[binary content: gzip-compressed kubelet.log inside a tar archive — not recoverable as text]
qYwlV+\\0-[ +NKx^͓ Q8Қ6m qZ;}VnmEv|N~7a%>U{ Oy {&:[|*{{qO !¯Gٞ{?|t_ώOm4^/ΐ2!h[<]Nr?=?@"qvy4MX ;A:/^I!m̍+wH*k-r[ҙCd=qC{6zD-=|tO{}Vm16l={90u/g{蚗/=tɷ puo~|83#$ݴogEU7Ì>niq?>e{gt^wmGܭ-~y8xAl%DiQ 稔4@ULdSVĔg5H6SF {מ5:h{N_8F& Xl.jg@kRxm\ "pЀTs H\}k65VfrSVY*§`DIEJ.1b}Q225&TpLnaSW C6>EGm@3'¦ M- PEQLjnN ~48Ds*%+Z kQ\C-EAB J$1 \b !+gIP0{ $:2)׃V&YG7XR; Yk@EȠӹࣳRM·ۊ*0U<z#L(׫x |ub$ c-xXBj1%Ba[U i0.4jXZȺ8-of02B3<0_Ԕ7H/U`6@RCe i`T Xy/"B}V ~ z)J(Jh*ixȆi6,B"+sWj } g=!3,(R.ො ,nU2s2"%e-yo8! vwmIW~Y`ŀb0 FV^tSE(RbI"*V%yݭ"EFdȨLf#T@sVhD;Ea!p!! yX&MQ(?"睋 >O ^A ap&l5 C"}(>j.} 2`h` B% a+ p?s 92r^avtdV [9+e()]AT@A*kڠP1m)2UѷZ [iF 5"jV!ȲRi e%=H E71ba-h% 3Vg .hu-FڻQ`,1Oeߢ<!S!*[ CL?Sia`;LX\%T0t~bA^\@$l]Ddu$Z$bPKƀ)EI'nP\ eagb!t"$U@k*)jr.:L&0 ֨ `f1E T֏ˈ{+An$LLzttK·I'9e q d)AhL_Xyƴɭ!t\N'+UMS:Nb!f%-颅 #I`dlt" {jQ(bf1Zg k6 (1Udhn0@Gݖ=7fOQ  N x )Rૄl>膇dsM[N,Cpwi0ID@yB0JD_!Kk=xPUz'`}f="ÚQ!|^ dE^H"y>f/So-:a *ZWNV( Z1:.2d˃>f0tSQWp-`5#[r%ἠ?䣭?n*\ܕĨ}0kk1e֋}͟ni?Ro.C]7xނ8X;0onf|W [m*FKi֣e>VѲd[F<!=~,޺MH ":ϝ3^M:,4CxLg'^~W:3*ĻX$ـk h_~!AuiHAF#e3Iaȃ݄x:i|4gNrҸۄZiN? 
r*=%F钭"}}=7Ν9 ^LaiBѧ*«;ۗ#&J(s"#N"> &IN^3ȄXO^ʽ3OE9sgyE2ƭ9qWY$]ӱN)78)sm.QLYy-K(n˄z媲6sOF)=Φk}x]4!L|n;r[_Ju \*^FFڿmCbN3}7֮7"7'ќ/gӘ~T7 Y& @FߖҲr\L瓔dѱ⤣#βd~^iIwrcdN'^O]*5c)*bz]ކg uG{νus9 aO+Q1|n|§S TxE _M:5^}Q3i!,)aB%WcNⴤj<ݰ;[A^_=у Y%NADʥRW|\Qwd@uk{8tgBN%3!b8uʄCLϩ{{z:Rə֭҄= Rx\F,SFxT7|ZdSX.LҦӐ T:d%D˃< )}ç=o|%Ud"[JSWSMGR٤e4)ߺ$TUpTjLyQ<>;P;a6y:i׾Q:vw,Jܳżw&g|*?Mp' m7!6^z6:.4G$^>%HԔj憷sXr}n >߄@JC2޷6,{b-}{,KX"܍ԙy\LpM],?/זxv_V_fhٹ8gB(5No"݆+̸rɅg/ x:8p{饑Jc6[!rzqU˰;Eus֨ɏv2m/'XE].bR 66𶢵g|H?̗+ܼoɽ̏vmwXU 2a*ԃas|Mc'+2/KVs[TzǍq\nSFZ9R%V ۸Z>y?2h~ڢly$ogW>>TBTB3Ec`ۋQ1#Q9Qp ?\ZD:t,Y4\N/qmjV{8h0B*ۗ!M5ЩI-.XɡMy=pHTMK"ߔ[=b *4?ovOy9<LaBf!Okjz wh}+R-WT>dӦlֱAlˆ7u&ïYޓaSVZ/(= WEi/9^~lC;o5wzܵ< m*)i}ǝS# t9]贱,?p2ΰ#рrvP+,, 8FkUq\t1YZGl'I %ڇsfף̷ AT\:\aE Y[.KUu^sc^1wztgz5fۧ|v}0zh#ᣕB!tnɦC^4GnEE5h{bۺ%qf!F( 1"K-wᏗQlVთyPbJ WcU\|^m!G+qGZQK:65Z\8kK̕Brߟ;hv|,|̚:.@y96W .MΏh' k{EWZCD]4qVؑ3(wPAr5ߎApǸ?BilT֏Y%_R8Ga #ݞic&H}f4jc.yeJuS.G|(ٷlIʛ:bzӳn$|FQQ?wul2D#@UC!7Gb,"h?Nl8{AjL֪QW렔sͦP`pIYkJ00-s^r93zE7Gy+̀8\%!\&Ek㏰86>r̊ x5ΧϽeOzr>iBˮRLn_ͫ0A-W^cuytzXmHsyFkgה-lαc 7{++{NiwSMhSň/=D6v,{^ZnjkᎳ'=>RmP8_Lzhd8 =z:(0\.^{+̘M>G}KI9SY=YFEi> C /qρA6<3"}ILId)=aPYWWU݄0Js qh aaezhD F[~20,D-n Gb`\$¡29nr Hݙ;=<.Y$XE Hc@ocqUDSlWejN.1ǀ.8y 9vG`Yx$~o5nqRD5qU5'bz*FxVlA"+&gׁ߬d3Rf ! V/Tp H:2OLB:P8 \qjrWx9ӡ?Y~&Iewʲx^c $pstd6[r^(Ct] 5pQړ}_3H M9 ٍ$iǂ mgr1m>cõ푝#8Қ!vf?_{x0(BߔVmzZnA] yeא:*)Z4$ $fj>4{}T'"<4<睙sA9y#1 N:w?ل@Jzw$y!*8S9"=&WbtZx<۲@BS7S ƀݺjRSG-liQaH綃)WUT9ݪw<\e? 
Ǒ !0{T̐0X$ *&p>]s @o%(n^T~ 4(hބRО~\##69fAG{DŽ;LCdhO`` ̈́C(HC{4gVOT}{s (Y[8T}/{$d(fzj5I&w(H uZicfD |l3PUGG!;nI 972LYVנ;0dg#+u1v\ uq#8vىH"AV(z75;@2R Nv20)_nzg7H¢Q}^uQA#= 1E ;jiA%P"E$ DEjEqAӔq6L0* zJ1K,_%͡`H FvCa /'mQp!#8Gj'a1Z^ lygeUrY\qvV`v5?@9`/zs'pT^8q*&J R!|5UUM*dIʥSGwX pnCՖ8fy([h{5k G!sS1HA E:`J( @&(]n8^-F+HR$"2!Fp @&)CDR 2)A25>AL~0i,`:\r ?9k.qRs(e$*S _o"Q^VT]zYI<{x]yYͯi20f,u=T=03`NM֒"c|3 !hYg=X+뷛d`6|lYwΞҾa)/l"OI)wc9d$ׯ;3bQ˜B\ f05N(~#?~B %l`]KՋ+F\\haƻb\O_ٔa+ P yc0fѲ lLHQAvy[M3@A1GAM N r?gDmKo6m{x=}\k˛l`v$_2:q o;EC5TE1͏me}bNHwVu㍠r6ts=Qih&5u*sSjax2Ìw/mUW*q 餰˫va]^"J9 8ǚ#ॾEHV֟) 9,yэg!^F?:`@I*(YPJ65)FYtq:I[mDf+yV~wH98JEt):gww(螣 O ?b~~ d 1W #P$"#,^]hc9i%]^ywbaDt1Ŀ 0[GQը'pab;}N"$:wkȃ9}8k\P0M3T]%KQ5)NEO(;jpBqɅc 䯴9iYb֓'mfWY+z=/z?6؂9-I$" m{ yE" Vh x {uPg2jQthmfh4ZV/\f ݻ:[˲2zUIA_?c{e|ߑCnzňce/=\]ήixZl9o!{X햻v#nymjm\$g.NyT2Yk\2o)ғ][ Ö$;Qf{soJٹ!?!ũ]ÏU\&QNoEIK]ϷPs%X+4p |QZ96vf{Jhsb2hrfO7x\ec'ޙ_l"rD Gc :$\G@4!28qX+Ȁ e u0'IewEq Zt%[+JDB2 D3Q4V*JJ@Jp$Rc; YP¨.[̂0ɊM}Y HJxXcXC3/XBkVEGo˥ڞ(* K:]>,||Y7$0NO\/S7cA69lVVߎmeHGQeTȓ&x^jYMSɣӉfVq-hJiC!"G!iBhXJym2u`ˆBuDJL bΠc@(S@8 DfK{ښX5xMoQ"#W|:osL2rT;jDk؉(a 9C7ݙiGT+)ɜ4lB*+B',G[zݬl}.,{cyGZLvdDo1r4v}!Si譖|mͶNf&׵f>vR82ROq!񱡵̓jq8m$)ѵ8<;Z'ٖnǀaHo\\j~w * S$ekˉeWRvfI_|wz<{%_H{{5Bdla. M#;Jwc{m7+p<ࣲ6 5m' 6$8!)~^FտӾK)1Kd:vLmg,5RG?Yc <`5<0)5u jJ"ުawWհ]H\D;.Ȃ}pHcb>X TrΨd4O TXf*5@J(Y.BY: HܟB: V;=9m@:*zy\ Z=xt(VO0 ⾂0w sa!1slgRD0:=zG0X^-ɗ)T"+T=PL5%56&(b42%_L\Ϟ/_v.p(Z韜}9B ڢ>|n&z={|yVTDK:'v¥jKӯ W)!2*=:oEl0%A}3Aˬՙ!5@wEDI _m+ 8QPjʊ vP2D0սy+pܸ"Iڜ)>n5,ϦO/$"P'OMՐf+ _ ]L:Cl E;V:RL!:]H`k](AC@x@4 B1N+$o0Jeσ 1 @u3|GpM_ֿ6THL2! 
R^IXcBP3FcA"+i26f$geY˃lG,d^:%i6"-1.I4ri^MłF 4; ]FkG?3-y]Rt)&`:'b׺5 U֕_9~c_fwH1Y?5htcXe) 2;7$.e0ɘXB)ۿ$J`ɑtޡT 8`WGs 喋"ӏӲL$tY=9p\P@8hPTī"IPM>$֛~:+p>Ahi3כ~gO//y=vcfLtOXԾq5xh+I9cPACjBWל<[mŊS $;@ZUAE߽lT,N9e Qu:m/n'`Dd)pۻFs"W>95u/,]oYx(I>-emvmԛMֵO?r:CĈkZaAM ;kGJqJ:"I?hZQ?>ћD E oM8 64Uƽ7I-ߣYLtN[jRm^~Cҙn^ـ#vJ>Ɲ~ A#kz>{7}6rb+6mK)& :38YJit,֤ qa>ϴa[$'9 ͍Tеw:ԅ))mOϝ~.qwO/:oQr2y<[.47~=9201y,Td&Ajb=Aj$]TeLv}_Sl܋t.;؅F<%ۊ`X=t ނ樨$CIe0 4 )t&Q`0J b+lxS,a5R/Ww]m6sYεL렶Y6K"@|粊d] 3=XDz6<'jYŮoFA`B(PO$ERqR y~8[oV߂+E^y%$@@0IBP*Ʉ8 aB0H$p2f LcFA[N臜{Ԛ" ,KnP٦Ξa1C!;2]SOk ,{S!\KK4uL$c0yGaSÎo,/䚘>/7ތgo5?65?̚ 5[2˂uʓQ-j-'vD-|gN4zΤx-[f[Zpk~w8ETb+ X_ROݺNog߷deOޝ -cT(ܸ[+ Y&PDԛR,Ĕ X2'R BB>J)dJ"TW[~vk!%]:PvY3 !]ۂ$ OxBd`f &22#Jt3qJ0ì%.VmK 2!u:j +)D"0M x#,"P4,y.s&;o[;.#G9țʛ3oA<#PifwK"|<}fۅwpRDG$`_˨aCS! Z1A!^Ao^AbS Yޣ2J!$  10\yv^+w["˰"a`HHFf:xGP -__L!`pxyx=I#d?tLLJK̳/U[~6m0xP(NGu׿m=qr$@Q"}xNvFL}Qp=<禩H808$oզ`@ hPQK!mhXfN`0$($T@REo'Hv Mct[LI*GPrDMG֝F }k!v^?nQc@{UNի3QD1O!G HS,[@~&q:Ԩ1@~-fRDpzLgc q;tqQc5mB]k b7-7Gd 8.ɻFݪԼ#klDh%k7]Ͳol>{5, :1.Ye*eɲ G La` #C5憾.[~Ŀ[ `bl`k}nv(' /š)vRChͳd4@.F˳ڈwq">(m}L~q6<\oapx.OA,aKl Ηpe Ee)$C9?Fj)c_8:/K3xo0<[_׃2׃2yA[y8ԁUšl`;vyk1=>;ǡ5C`QmǰHg5beD HÊ=<6в=?xf5ZH T ia6@|o PL- s4e`HW]?XWΨ%#_֕\]1,.J5_odxMȵ.J@8Ns۹-f}`o4GFZ "ܸiށVTz-Fo'%H~'H-~oDHo'3Һ*lrCQc~ƯpMPB'(oAs}]ݼ !ȮQm!v as ^S-']|ᇠ kp{Su^s]`&a+W|YyMwwͿw l =ܣڼfֻF{oَўh2z^ +{_SvWfQQ Q'h ey/n=n>?{WƍJn*}QRJL첝LJ)%35}Zi5+Ee}g^<ɘ $QT6yoy[:P\_JcKy9EekӜ*Ze !QS*Z N;hT;SZ[ňR#zTۄJs~ s19זZzEkx1a зKFaނ>~''fCuǹVJ߽.{@4@0/iGe")+~` FaBp "%h ?3P4XJcd((=L0+nL S+S3 9K3ʓqrjoϰUDx$Dfu鍷|w/{|0S,xyʹ Rcn-hrU:$"o}0弈M1Ph^X-*c|{ЛM]Q&yP _&(|;,GOɧ={Uq.~\ F%W}Jԡts? WBʜ? 
[4Vm$.ӿo n-}~ٰ?]|bϊJɂ^EEaspJ*,Y~~;NTU___~O %5%g(A:H~7-ޮ^񐈌" ZEg SE#J҆ 6X1bCRoJ56(XhREXDoLHF!c;EVA\-I4VbU@9#w&6|!Rxz3}k:1?TF)̸0JW6EszSNn] z$۟_mn: KrӠWm1/oMB#DDBo|{>/r4';_~]kFfa]o{d x 퓀;d+T۫iM)!,zGpׯs{ E8'(⬲zl_<%D2/F3M˟ 0ůWi!w|k:_?Nd;5:ڝ S0L^ޗC=)|atLHv~c5XMu]m跻U16 Ըp~;\թS X)ژC/g,b"{׋v?|U_3Z\ͨXiqE[l B\SWZ$d) n(%ίm  >[jZw135}#zM%)J%Ao¾ hn@3.T &񄣭66p9^S# [0ߘp3=Uݦē i-M5ט[גi)&b,X]+ ( Ffg isߊ6,vu>cu4Jd W!jδةSKϬMqN(1X;E!`ɱBԅ:V'JsfD-]YC!NeGQNX f[ P }3 8i^hͣHEa.1F!ߗFGaO1*sM@@ AH1(q!(='cRG8?W2G {zRcrFR͚1a10:F  nU@yeukKNȻ41q##kie҆3o4 Wb\[?08 }FPZiA-Jm$gZrQ¸-LnnSdDQ;>liVjȔC`}!`l؈(x$M5K8PN8 }*D%* eKB4`VE2FڡI-}֒%!1|1%\p)lb`%C;"އ B0N+kcR-1 B$69)oA͝_qz3aBuǚ]2Բo#}܀Gi duڡ(8m#SDS2k A*c6>"I(VIrkbWWa5S~n(Kj@Ã0cFZGq3+$ҁ BVF#7%-QACls\"t :!.pCG[^q{p2gMw c۱1sg߀Sf\b6vvӯ/`W ƨl`>n 6Aq$`wxg8< )SĝK!I\bY5lu#yFƁ3>ka &FhtZIi&h AU`Jj(r6áV+uW5!V}YS!]ѵhާ!=CD}kdδ";ĥ3 ˃)rcwL+^W{zXǡ͌xmQYժ,tP60{鸻u-hEYR-uEӽe;B!{+D1u0:WWi94,)E*$uw_g2DSbs)]ls]zseZ0y)CNiǑ>:Sq=@^uB.ۃFu`gDyBkeINڵL@':?iBGI]EWaifsQ'YTIݶq>{UZX>=sJӬR5w7uVYUWIAM|ɤv~պs- Qe]5I4JuӵMƵ<僋_>s:^ԴYeևeClݹVwCdivOi 呙qH?bǏLj˥2i]D~4T y3sk\]]XyN]`fTv:ϫ:`yIC"BׁyV(,GZ+8yV9Vq"Q`.UxD>2$kCY 4Q"ǹͯcFqI\X'> AfN$ʶSPq<]rd=ج!h (5`4"O;\A&){&$%>iu̢vXKaE@PtQfyF1 ,7( K֞UYJ?rf|û7/!u;֦fߗqlqM:<rol^!OSvvxAit rr1].Bbzq~ uC޻(Ќ;! θK؟ľOie[1Ja벏#DIܒ%{n3jYJ&̗#Z7Δ ͸ nUAVJ^H#\ yFXϵ1)̑QN0J#C(j.װR;>@J YLj\t;LHU i_MXC$U paJ9V_I9AR2 6-!P,j!][I*$W&+hT'P0<6 jbiYYGQ 2-V5W1iȐ "oޗ .Gf̸#ʖRօ_JeWp))WB/JU>z7qanADWLW"g:gx,S/ .5#3_55~o1O-)Ü3$c _տT"F:Xa `|u$ 찌IA9Ǝ(3$&!5w$J:I5d.8ؾsxi l+ y%u+BYKr3ŷ y5Zz:jH + }M`:Q;hd*"r~#RmxX'1@Aj(0yje!HɄA%`@MFW+4r,2FB6XDF bcϠ?.2m (Ri+l}YMB2u/5!\6RN10Fy#80bk ėe^o) 7RdHM%bGJ95 8<'M30$D N!e'A); 9X{bR 89Nj#8"S +knH>ˍpȚyؙ,Nk0LoV$8jteK,ؑtB!'fd @qHhkErךc ŠAzU#CLa&/4!a3fAr"> p߻_Nِ{I^y^_5О}lVUM'u_D| /V6$>oKT02y/B3ig yx/y ӯ+! 
{WEܘ[s5M3 MEo&"w>lo)#>}*sm{U|*&,^10dxu{S5(~0,ERi( Fe{Q7EcP'a $-@RBrR etGz^jR 5US59k``yc`g4s">ѩ28:U1?3ӂ <1-8ɘT*.@Bj[Z c> iAƤ2f ŕ)эW$BB=H(+(_lL.Q{;; Դ(i JбDd`ڍ(:❢YL'fٳ9q|i46uzQfCZ+.ǬwgHbߥ~oMm /O|ߟ.mW k_;x~lpNף10I`p{…w8zwg<iHm?Lu67sZ3sm"ywl?"6Wa$xbdw,zv>Ie,{LAReG{vS5n1nw{"FZ7|t-d0;Bd3PdG,p}4zul(*0r~| mPv˵Ipk-p˜?Y(D~ A5H|6:ZH0w1j|.1HLi4D("ou<7(A9x)_ Lj\ :)9HA5Yf( eHNAjR!h~I"7Qpob07}P#anYQ)+}JBSUf4}8\Jt9.p.O2b$谖ʸcMSG$Qw@I6n6~5a4N93|}^Cߘ7bo+55)xyeV]s6iViak`$8KHc#5EKj2sA]8KWؘHl},\qE\+)妛BVnO:ZqlV&: 7= g"rE O^Ђ k fB(i ~[[RJ$gW&;,EW! ݷ+dZ5?Ԇg\kwȷ2s(^q{ oҎC Og&6 W]{># u89€ibw'P ϦHalM#OKI,L4`}RdI6$"yE  yVNPm`)b"k&+G9g׏$rJ"Ē|͛цg \WCj!"#Akv/ j@D] TVml%ޯ~<ևPx^H(S{Nlc=i}V7„aF*/d(W2iP} Ⱦ_J̚_.d?)8H9A99jM6Jc r8FEQ:b{=!1Rnt A55MԨ&÷ӑ<xT\Iq TWU:B"Fic[xOD!E"jA,z8ϖj$Mq&k߯EuVcZRi>2 X9BzR 8 $ iJEj JXHR,U$fI4*D@w8>z"A@Ⱥ *210 Z,JF6sIk፹Ho+mF%+ jppI8ZL JRClTԆ# pcXGߔmFj4ff)RZ3֊ ,wpk# l DרvH JFjGUO14Jzڑ"𦰭>=,޽Ӕ7!AMpE7Vب3٨BvS:&8Z0Z3j=kjD8]|q2*DI#.*ҒE7qc}NłN7qN>52x 1(Tt҅3"(Oȗ{a\lGz\Z ??>~ۗitΤ\8&n/"pqQpYQ_?IOCa4^W V^p>gJȍdza5ݧae˫ɾ Iv7R"jj U˩>!JbA(=~挳~ERAiʔ&DcUDd..݃hdE%)#(‘" B.Z<24QPAyư ֓){ b0Xe@5y} il FN? _8k(7ӘHPLrkJhWP9 ε D=gy9' + ,\0Jޮ[pͤggǑy2RlNYpk<` EF03\E[ ֙c7`& X'SV-pJz\X3Ul"αXsބ ;Ky{CJN/^0vݚ@$X/y$MXm5{.oۘ]HKS;DV?/2였$GME# ܲI9D4*O˨K1lE"~+ڿw6>W2G(GJ#/OZITjv,Cz#lzoDZWAJHìzJ?{~".XEzM&Y80^g>cO?}1:Un#..8tp&R}'5_[c\Նؠ#R}AxS449K(m ZYy k3c\Oec!iQ"@@|+dvA&,mW.[K|&/8U2i'@4\Ii,/ee2[u.`59 JӬ:v`YOW?)X.w8 Ł:k[ ":rt4ZmXDSby2sײ/4ٟ,m>N|ԩ,=1KHmqxUx^U?ջ=9OTqyG@.zBLz<;WZOatzq~/Qo{?q?!J0Ig< ],~ـ-l#ֈZM%#." [SheT BJ`16+mʯ]B E*}±ZL L~G2R\?Q{\=e!ҩ(Eq#ߞ5츅&5SmL{@:SVLi&|?acP{ Q]}ćWXs8*Ni|"a:BljUJw DelC}[WWgֶ39sh0&R=z!$N36YSp)$3ZS궦d2ZSZoMߪmuFΰ5 2z qhA7Dʐ(*}'#ݻ3!uW1=}7w`KaJXWձ[99|m|\sԀaq] w8XXi=pQHBq9[M BZC1ZjbBX*2r3@yV-Fkɴi"k1ͭp6Dʢ*xt tI?{q J/[NeȾ_TYIJURVűj&¦);[{z3蹁ajNw_[Ӥ~S iسTx|pybi*?C֔@Vz5j[&VXnmӞ8G-_QºLX0H7Q*gzX{ϙ6CVXW^ip;5)J: UƵ!\ζ@-OdXwLiBTPIN)a0?qs ~̬:+l/5f2a3yja}"gSGӊ+5Dtv4[[wpJf-%ll;55ƈ8u -_j+]uIf%\J+Ҷxl ˫|~uv8u-oW~;oԔX`[3 ^#[7Wfo}5x{y:[CŤqğZ;Wrwu4U׌&T4Ý@1B&Ϫvݦe^ldBy0쮤p)=>_i)*iϢK40|!! 
Ǡmʠb]Ckot)< t,c&Kʲ Y4&1f)uPqn4p!n^fxn5JC#'d-F]Jy ArZ Mxn=QstEAB_֛BDT'n}>r7/n?i81G|q};5כ#Tk^=PD)A,cr0aC-=)#tyf@(J⢴7hiFGw{24`\HRaiZ/9Ѷ5ǫW/,K[^^{R|Ml o2;[ケ+Λ ~sN d(!ѣ<;kݟ&魟:)^6F5+ Sb۷~eG\jkkcͅ!WHhr {Sn _%M9'H^ Ț׎eq"lM*;M! Teo%Qp8($0*V9A #WS- <ω Xx!`;f8TvUrQsQo\}k"Es2_ 9 *dM#;bJW}ٓ`= 6g߂Yl.'@OPI N|\]͗1K|[i(f;ipMֳ"a"TrvC-z(f#M{aYJ?<IX̿L&+7UwXц_FϕB]aY@3+$T.GlScƩ#&`+v@Y%ŕ,hg/Iㄑ?)*K2,f[m.Nʵ|s& \:MW_]x)Ѽ\9ipNQqϋdNY %[],F%QG.3Ô#rBb~q-HKYzWYQ1<{N./ 8pB,ު0VfARv$ {ھZj֢V|<;[} u:U4Ϙ2=f#"&*8VY_Zjsy`)t j<Q%J!uKիlJ =QRFLRbԐEUr%"Y1MnB@(BDGl+RFyUǰ`L=u8 )^cȹ`v:jO|oW}Ϳ=LE892 cVʀ-ǵq\0X]Դ6!_̙x&^/RsTS\̬%?O\~S4r3brarj, ̂[a(B)'$Pd5 < s繢|W0vaMn;,uQR/ze_ N%w~!1iHj8&9GghD:DD#V1L2/sfF+ AX`(+r@nQ#$L gj=n Ӱ@G`tRP/;l qT`)6΢x8l54#IJUw;wkDuLU@Xe ,c+Mal.a:00Q+5B( s*) 2MsUR_dbB a(0Hpa="מI0`N/[cDj Ï*)ƊÏӲbLb>N["zvwID<(B`|@_"p"ds dዲޓ2Eb oqIۧR`{Xj"ٛ_sEb0(Y``_tQŵޣ@W%HNh >[ A,:,`ɢMA:`JjEEo*PXDEk.:R,ċ=lĬ"&L`*bP9N1gl|SӃ@bLj.>Êov.a\ٕI3fYR98U2y{3XAgv|n?YO,Y2m-WY:*18{S>OKN8@9D!7oG9osLux-ƦKF/|qH#`  sM "rb%mLhA>\HkD. Xߊbj[B2  +1KDr *h# #C[_xcЍ3U %uv KA|X8#6[0ƇX- ܇VRs{?UAjZi; ZwZ"fbМ"W(ޫUȵxY%~wPٔ4ރt2 ÃW\*+ϱoL"nv _j`%\<Ϲ6PC ,L#V!7⃧=C9<={"!e*0a=*7B{`@۴Hgt:]XF'vuRA[>/`'%ʕ̐(cafdl̍`Ф85/Ifp^8}^$u|42C)@B< q9/!ia1rKIH Y?cML͵:H8I[ă$[qKbI,E!`V** c9`(erYFN;a܅V϶o>ձV&j#emM_ԙRZciREЅ*J$EK^5i ArFAigwڂ@A@KtA$p $5"Zȝ B)K5iy%[4p݂4oOR4J]~L~*>;1I^KޘҔ);]wZ5̂לKmjZ@jOmt-R#3ATSs* ]bgq6<1E"DLAEqt7 8|Nڙ|q LJC"CE] 5 1Oy d#$WbZRPl $1o<< uUZ0ywW}9Jy1̍&bS,7pUt*Ao%Iޜ^"$/WY֩WG n?vj(˯ӕi=n..:@slݏdΦκӣ.mߣOAn%m>>jKY)Dlaz/ϿyԊA_ź]shF72טk/n_M>CPi\q?O=LT3vw{xH128Tʡ͡A "Bd嶻SIN60,z,tbf ˲d h&D;SI;F7iҙg-gG-0Z18y&XRo]`J;0QoUԠwƞz*ܝ[6E{_ERх+^ۺ0ںABRs:ۥiLh]j\|XL Ɇ;5Ju~*nutR``" ݪM1.4ӧG 0F죶=nGZ cJšJwX-W`ˣ  Yҳ\fw3N}ǷvtE\nFp?/{8hxoAv?Fcln石ܺSUtrjO  ^.h)$eUӬwZw?FmG(m7qY}l4<>>>6Tݪwⶻ9)Ǻ;`ӛp{Ah+߭>j9VX8JǖeWVj;g5}ZLIt04A$OK b~{tGOUno$;n(w!v \+ʅh2 =ԢA縷:Dl5p:CgNP52¢y;O.P%}:d{d),]Ts30"=\[M~՞s`.!ivFLr5m穤p<.(eEf:ԏY ?{ȭJC/9yC0rd2&0xh#K>< o%m˶Zb$}0'uiX*EǏmޟ"60-i;P~*"tKJN<-iK ΅SNwijN|<Ֆ'鶴D{Ywz Zs>4k9t4ZRm;-;UY $CQ6]>Kѝpy~:-8ߥ:m'CkTe||u&7mbt=+0a֢ 
y0kvڊEV~Q¤aetIu+i݂Zi\ZOAK*>K*>K*>x&}m&Zmhd0[=8D*86AL%- 2 NRԛ.|9.^O8k=7CQEnC۹_ ;]")m!xCBku.&祿jj;gϾ4Y ބ>{=}0#x(iW#l/ *,QC0[i7n08Jhɘ巏t;-w, `a2,x0^K}-rwqN{~uM<sjk♍ I@(6Hg24Pb3 ! ,N+7KeY+YDPFiU?Bา|h=u7e(-+'0ʠ-^).q('5< }yS11c.RcI&0w`H&H F0>  LSsu{ntX H~>w3TSOC ǭa5SjYD keN"U@pK|7R@aI@@!ia& L`*H:0",eQ)*{2p{;p~腬֎1ub2Z"ẔA3悓'(, ka4!]SOVw2:|[ //YI[+sCEM•,`ķͪmEa2p`xc& B0;yR %+hеL}+fel|3Ò8!zK^#£c$JH4D䢅}XIguKB#%UyϿaE ߒ@9q sn9SE(=@h*V!BSTVx9k!R)'S.=D |&DSx'7XG1ӘIUKBu`vKy#Q#a(fD(`ǩ$F!d\I!5!Rf3 3,͑,(O, %a\:69moVx fH@[2#2c"b6S0%mf<#.ݟ 0 -1rPcyԂЭg#'~"+)BJ@䪷Rq:(W0 Ӵ^2KГ-r8)~̿MBnr5x̗zdcG~Ex= m,$O:N0+_O~6ߟNW+qIg74-֗J4}K{}uagfw˂=f<+- C^8E8%{+֍jbP:mlba+˶[~nuk!/9+֍qwĠĘF2[.u&mդS;-u[ yD+M`uy\ jmlmetJ|֎L` h>ղU$>떋A䶱u (TYֵnM0S4SRhPWW[r>nJjdQm;%@lwivGk` h?7+MJ%}ZL jQ'vTNwir;Zۺ5N,Nߥjݘ>4:mcϝwh]C^8EE "6fNF$\]tXn婀wV"ѲŁZmoylBx&B[1QcŨ౥:#"w >ƁHI8&<!nAh\GB@CmI$hb" 8`S3!ܪl!i@VE@iD?yp #0aL =WX{d0:B?$wPSaltcb' FQZ%!ECh c$.B@%WZw-$zQ4(PȀȔ![l0X {0 Hi-L&ΤyLZ@(kB(D9#w NlW?!F Cofұ\yj*_:䯾P2}=ZY5;,c6XQN:KR,H`tP9)BzY@zb2@ Np,IQ[ !R +Ljb0b4JĈu!8.vYbDYCC)ۆsLRT6AI,}>OkYq#?۔*ZFOjQq(~~̉,Ec|ƛ_ק*xP~Bd87'TpZ7_WUloF#QB@W%~g엫>|@R0j..NP(Tޢy ϔTUU9W_0ksvPӚ](0%=SgjH3ϓ;e(9=]WL ˑq{F'k]sZk$SdbԄH>~hj]詮e: q2MxJ@˫4,"dJ #"baP‚H,"!5tdef`3N^S#L04tTXJD׈q6Nx[σ+ 86Gc = ^747C ȍC?Xݐ-lMo4{.}L* llinm-g[yp=cְ0B>s\i3~Ê[oBVnI}86 sBޡBLyy e[9R;ʞv6l0Dg֠uH5_fPxr.\{7ԆfT1>@N{ {J_ZTۛDŖeGJty@=ua0 A?j J}ggggUF.I_riitiO8.EH`c*Z$^ ðFvԛ%q:UG(k܆eؽ*_--߇5>L ~wSu,T|z.+Yޚ'8n^Zh5tKlU)y 5?Dzdloienn^]JA.9beJsi*AS˹1B 1h,Kb~l&t@ld#GlQAt^$ zN9"j@hn{lA~1d{{Z:fjuG^~Wc:c:_@c/R)DխaXPο$ÛهB!ĖDuWW]j|6ڨX2b"sr_3 T`W+ N'D1ժVL|**@>sCve%359(AS:EsziAYSrFN|yp.ݗ~ ,-H Lq:/CW%} ͖sB)H[˃0b:}2Qsi௨P"(*(":#[oc%ٚtE}I!ÚiJB?o^.mQ]%i쿎.31y4oWg5vgqL3S-An1u+YrRZyZ"t('Zj l=WV'6s$k%sZ , lה2 ۟}oЖήL qwgM%=W6:We$5⬷ zQlɵ(|ߒv{eܨ,w[J$|1jWNfZ[% JJ?2h{-yh4t '׺P]M˻?o4c]q0+>lji´_`IiR^ykW`e:ėևü⦬% hw(_nbϨݚ GtQGJVn{ej6$. 
dJ _ǹVB5A4v;g!H%MkּjvkCB"%S"SĐ"ι,}V9`ϒ/巘NLƤٻ(P.oenʟ&etsb۽,4kѴ>b,3˚U zNޑfdf JG}Zo'WhC%A6zPtFȉHED@9NPeCc|V2BEC kJ!}[?r i"j~u[?,)xr A gaJh^R(Śmuk:ZRE%yX$G3"%k?Dv\)| 8Ld7ӱI.W~hWTjf[(}?)zY]IkDa|؅Ej`Y%4KI?$U ]ƭ#νBIi .dHŬ[U!ձjxՆѪ g9ˠ"%;[L`.Cn&CK1—ɌR=g%#4" V]hm ,NjۅR3KV~{Cdשdf{Ϲp=~5Dki#C) ,/q!"'806pTUE鲀_Rɮɟ}z6rRoKIK*on); 9Bdlr-k8 )9RDeɯVՔ~rN_@JɎ:?]|ьɇGo∠FaD w`sxEV$I_c UXA+g,Zҵ "gR<{n2f9.@ Ш`N8JGPM PҢ?ŶZ=蠖UoX)D:n"oeMB e R>~,\K \yWno_,9<`zaxvz^"SȎ^h\a{":BBr`/Qwvdx yJ-w<_]QNa /bԸ*̚Zgg F_=GBp:~t/Aq?uvu ƒ>9ˢ^<߸\s\o[Cim;\Ć_bSQ>dk`u|n[q/f-_BJO):`Li~ڪ>_R9ZQ+w vw!6BV:J[K)R:.wJD:.p;b1bIqcw痶v  EeǤOFhPO,m~˗}>A_`ߦ6ط{9'(^ T 96BZꄣY,,8(8DJJ~J{˯ѿSww+M',P2Ķ&e]7'?Y2!;_mщy-eZ<9V"hn l,,6P+3,b+tHWQ,{Qw`eu%$# }uBgF'8ѷK˙Ųp:0;oLgpUղ3H]P<4Xu2,Ձ6$2]oRMd@E鮺?\cm4$qѸY]e^:8q$˱ xh; 3:j+v..]cj;AbNApj #+pWkXW@\@([Ӿ*塁t?mLd!ܦ~5_mWs[WSB 431k X٣QL xoL-|HcL?J)6%=+VE3y<>?_OףmgjP?_]iDcp=OO>Җ-n1/7 ZڮًQbY\#X5h1w7WG]v[SheT [ɔtsw%?= Dh6`$ "D(o~ $$R {̘ 2U.鐀 d8]Ĺ,qkۊ֒']?GRO_i:>g;ׄlB)k/KAM]&[\PEBhG(^llHߺ3 ZӾ~d?z}'OE _"`s{R[;H\a˱hsU[)$+c&#E@3G )pT$Ed6ZD XL$d6'(fbKQK, fQZNiё`EqA|cp͆"ep/~J{3vr)7^X.u˼&ү4AmXC|?4F _`.Aж>;p[/SѨjfJg՘i|"rbwLoiw0D?+y4y/7Ń3%'Fb"괰"xgݾ/ҹ0;p"C.)BJȴB"w:5&&'w]M9Cw2ϫVqd]E"@$Ui_x3^Et~bo{,9^/8g Ԍ#_g`hJtjcOu'(tQV/v F6]{9ԥ'0xo֡b$f'pZf<γ>\.\>E5\Gs"saX?_5nP/QJ Xų~p̜zO65=t/q ^6} ȇ|ttJf>cI'MI? 2N *t(5|)H̷q0C71uKr~|Q@a BIl ᠱG2a!neFtq@F>RەMR../Y*{Nu&$9ģ2ii5isai s]s" BAڻڂ ^>w9T%B@ N&ظT^azqm0ˆA.Ak ):@H-$慆`S@{̉6GXᦰEt uA逑"0R:l:B jTH ,70E+j _jlŐIu"rnvYUMX[CCMf a֞jx"yNdahns;o$ :^m__xrL*6&[>?,b{ڼsy۟MJ{S ||5JF7/XIS+>9o{`l/@,0 _bP&B{Gw~_%#N@KV?=x#c(A#.c5}|R/q*ti_ݫs;$u NZz2ʞP 0@B,gb QP^@b/^|iDW]~ϻ|- cozm8R0 ;;rY vlz"Zb3%d]I5ՀBVP 5#|؛PA!zG(N:Χ+ G3R9D*ZA, b8|HYH\aʼnTX\`[ 8Վ .Fipcan4F犹5}ʌ*,nKkҟF  .ڷ.kG1jksU` Y3MjpS|wpE$ē1N$?N&A .#2eZ4rt>|{;ldy93ɥnsQdMFY `mX 0d{I$bļzg12T dWeć)3om;[ktWzmV:z2[+5{gqU򪇌$gMU hYCq`DA"d˗C D(عO@O EN颰hJ!F*PE$D4WD1h"8K2 >CXM:#GP!_X[dVPAjcBJ2 R`sZX|"ɵTVb%u? 
RB 02 BwGTZ ^(@L_+*ǔr$,HN n k CB}idkZ,}ִ  [ cN…AZp[1@+v`PP .$ל A6P>@; P_(Xas!`* f\Z(ДXH]x2u_Tq7ph7\w8FCI$m21Ee/{u2F ,* ns]lƱ;@:1$ci%8.aR`VB4JzAϧhݒHDJDwdx*nBpFx(B:ʽtg_"`"w'[^lT[NB)TE1iC0Q KEݓy-pC/uM1K늀nik"Oȝԅm'MGZ-?qTgRMh:|^lE{ѨS-r[DNn#@,/ZA#pqaQBګOٌJX1d;^bXEqjQ%}]CVeE.ТJz:j1Ywu_uOw.ʻP"ȱ'*Jo:G:"xNԕg%Օ[cAW$5ع/ZDRb!Đ,Dۦ':Hr}bgdB8=NZ3.f1ޓB7&o;D@q5ARғW$7Yc?g6 .Z-j;KhUN&g= g5]4[hT!kd*C(?k7 (?Hc]ʿ|$U~GQ~p\Ή RML%( !(Ǔ }e =ta4@Md{n @OA0[*W/* .,(XFUԅ )h'e]3(-XyGK|OP{x'piVV9hG[q=sWy<$NYbAT~h=%N\ NP &з}Hg=##X䲗H5b6r fT"JbsPn$ͩE|6Va26JP6!TsHà Xmq3?>w㓹%InןD\cCwemI XwV)Bgbb!5_fQ5x ʔ'7Fw[!8ʳ*K$܌)$E̻NxNReD,0i݊6IqW}_ubqFr(+ Wh_p{MMBӾ07.;223M,{W58N*ИkW|Sϕ=V^$4\ʍ}{0 ԒBa=ǂSEX?K|s?h_؍F@nEFF(SP*u(7 XsWA(I:&rAR7R`L y}dX@FGKVFˆ#ۍث}CQW<v'^v.o9:"DƜ58tUY%&TyLN-z}u,-8vxgNeuS + Jn1<\6fC%Z?Ɔ8n/=#ػjQs)Ns$>j#  jtأ@]dw yN KW Kx0Z?.Tr+\ H5׀2<!>}&B)QpQO$ʶH%VTs;+Z*M!@Ԟvퟒ3q|HQI)mykRЙ>UAVVrO*Thŏך<Ԯ^z4w bύ;؁JVy8fk+c+PUCEizEt5;M̝*s!7=葪W*[q”i)?Vlb `Yڠ>uԊ.d:WrKl{{3ʓ}=MB.&qpӣ/yB,=B 2a|;gnÃ^q= \Ҍ$+ܙx[^s%FP3O?$!Ww zӖkgִ|Yޔ]Kaf~^ո~^{@Jv_E*:K5v:[7~ WY@3nX@\ȇZ[@KsC m\8EoͰq-SVt)6k-pFO`n$9QQLt̨o/*G_;BU=%|AS2q۩]}eR57EXiI𻚏~S_cI Ebm$F==Ū|W4qf^l I3b d2RF%y۠8ݓ7StttTv!MҶ&_R{TaXd Q%گaˊ5[Se~jm[=g7E,P)*ϢOKng3諕BvT8%QV@gwΐcq^D|-?X(~:}QPAM8I{&/e #p*te59m[h5E¾c(dKf -/kI3иP%;IOՕ:2% G0 qvc[2y*alov?E'_Q Csh}w?FKFq}u _`2×9,óxx'>(0T o^hp*>ITXCh}/:qicghf@eqYk7]z=A-ܣ^,3*s,=so?|(:N U ^vWtW%+LE`;^WCLoҫAr^ez^PBB Dj|!)i&h壟S)&)N[V4B˗qSdBB0[O(4d3Dd8ke^:ӄZҮڪfwDYnDA>"{u6T0=["R:@|2n1+ٰ==%ƬbNWE*3-s b$L`sӔ+EM|SÏi ݊sFC G'FÇ@CͲ)a8h\蓲1yD>}W㓙5`Ŏ|*^>r7Ça$ȯtO=|:m{&? 
&3kE6xؿys~޸1<߲QW 4=3ޏ9ԎL۪XA|[ vYM0Z@3xy*J3XmQKKCP''",.hf#LjF4lh(3!BPy\Wp) H0!f1CytPJЏdT +jUm̀|YqmL]3(m{14x΢ddaqT/;?0Q{s>] F7%\]-6K̫-nGm*/r.?Z Z%vǫZR ćIO6Y'f2׳>ݠg=04egݛaَ *ՆoOxQ .*{^+c+/W?w.i ^ne=L!N/?ps]t\ʏ{|ev_GC@n Amu<{ 0Y)"z3,σ2(HqLRDI:!^HJ5[ѳv] ;=̝+?ƫ/S Wǘi(PG[$U%A5LBIW& !%Eɇpr 1c(qXM)F\3FG+q.Z$T29%E!f!fV(\iu' Wk/Dk,6 j$NaR 8]D!ffӶmmj3VR^t_S-=e3)ݴsL (YvڲuL*u[ժ⚢ku MAuPg&6bDVZU;V YN ) QY;1ݍFQ@^/Mj"#`,*9j9u,ou%W{ӄj0/V^l9 i/sG/~r퍩ʶ(ܳUss?r.[ ޞН}!bϞ}AQƶ0-1O?)+]|:ć|/uuFoR{Yj$(y\ȨcG ҆dCIRj d)e=xy.%(l~cnȤlآ^[ FLrHԟupn(kUj'G8S@o_\Rlʂ)Ť_l1.Z`RLC*]g|W_]qg%&>(Eo^'zO]7$sf*BnUsw4l3jb.C~puo~4@c0[1f+m0R ҽ.sВЦuWHP-tIvR/<Y7r@`.TF·m+ UsU'Y Cy u A:䪊-A$;U~iaaCYȭL$&2&#$b dk}$TD,Aҙi ! 6YM5Nsg@ V&/E *u@lVMl7܈u;SL QB|U |aH +S`xbBt%I2Pr 5h*zXiP#f$K3HC[4If@N_5EZ2hc75rkLFJt5^CXw c V$ӦłOA31t> l7_BFގ\ 1R-Q?=][o;+F^f^eg2`3yy9l:N|`[Ֆ,Go4YɺW ˑ6x |YsiJWxBJ^֩sk/Ŏ5:Zp?fΩÍ j~t⷗U;q/j'ck*Q[1xb`rPJ!ıரᡅu6fI.kTi{!`Fh'{ԗ̏qhN<7 6F;f$][ }׋ɭ&6R>nZ0ig}q}!V Omzү}~;\wa/no(pj߯ @vW.=1wdæKoYEl?H]c@Їqwܵ?nja4{٩ BlׅO덙Oicuq3N_gͼ""?" S3q\-K ܎Rm`avmlِ+4~; i^e%[x%m鹔2$PWd^0iئrrcP 4S"Cg!#3h<>stNY̍*Pn/y;: Sf30B^HU)- ǔs!҇adJ$G%ZK>gHFUkNt GV:;e V 4U\V=R"$vTD$Kl\ L:ig)#3QBZ>Qڦ`y)Hl3vd;:!Ru-tH2 Kq"C+ %̵)@۩fe"BqjR&"@u{FeGOP[Qѥ`hVb,JRJIEUZ2K]76-^Y:[l)tZ;j$oN},| :e@,J,m)'KhBղ,MSjVa#U*œN:f=!胝/U9\&ڽS pޱM|E/E3^}m0A> ʓ.OkVǣ CcYGrag;8l!ç&΀>j9B~hQ\2q"&Sc{l18 4p NǠ-bW WW*C])gT\I+?Ω6L kYIUO[û8@*u)I?FDolY߆ӯQu-,o<*QҴ`b*ZB6phۺyz}|KVŁAg[vR&b[-Y۸U Rɾk#b[0L3oM8HDkMlKIz8g+tjɺ~͸لQD 9cCǼPy*4oY /qO{l[0 &up}?|<o1 -s><{Ƌmsv~7}9rp#v}rf;&DvƤNw̱4TSWٿ3͆lI(},|Ld|)b^7ɑq/292_J\,=Rd/hw"_J$:%yQ|~#$A9cˎH.ߟ]ꗭڏ!wR$jФ8J dLE_|tsW>s?Hz(U؏nЗ-8?%fpˋ=15\q][mA5-F5`䛻yt+.qkgn@UJ`A&  \>Nn3|Fceqbhך1N~>]X 2>]B΀[͔j3[J.pJ0P`̬Vģ3 a-3Zoe+3#9:/ݮGDaKiTZ#1cI???78e^.qkZL~kRnٲ^-&ķ ߅|ԋ^|"#𗓿_ږ殦׿]EZӷ6~M>l5U"RX}w]#2G?Dp-a+.O*>鏕ֆcIYkUtkn!=GY3@PF\DSd("-1F69l[z}6""S\«3n7KkRy#:k;@z!8vCB~r}*S2d3Ʉ[1E` ӟ/2Ho7'׷7?wOBgM9˻r8 ._I|d uf2Gw)Z̤S(.%aY[]7/}K7yd(өT{PRE2R$mg'X:8,iIt̆ 'HUT$%NT_$798k 4 aFAI]՟(R5PL|] h.|oѠ䓹 'M3ceh[(ii7щ@f}p￞U)hB}"_r[[q P@IGʐ)H{[*I.8l:]>%ptIG .ʫe0Wq o (! 
-4ÅOM9]=͓1BiM$ŋ)`hKC52)z^r8OH1r#5OtfB?Z6=v}{p9T}rIT |Q78NO|#(9=d-INNf3ʀ>\GD;61++UB39TU\U!d_ܖŎ5V5LN=鿖? +AW}%,xiPuQ9 U];x3v"={utPB/XQ(lQ7r˴>5V2t(FlbĹ}6 %_eA ʣvW7_R!3Ґ?i4߿}Ih YT,mE&e EM/sO ?.Z. Mqf'vxF>0gߝM5Ip+NRUdy HG(EaCFb+ebQ%bGkV(yQפ+ޔZp^ms` n1pN~MG?ae C:޾=^kFYz8o,Mm޳blߠ@X#/ΙukpM.*؆Gڍ+vd A7Or=:O'u,wTWcww,9O0N0NGT}i~__P#hBWD/=ԅWHZ.??XB`G;ER;T˔ mK&tCL0):頪H(λ Bv]3خ%ANy=IS6F^,^0YMV[qF~`j.IT|* 'PlZ*~ IsrD=Y.")MƓӛ{YoKI4 D='eH*R }m.Sm: @]qꩳ}3x?E;&Eȧ)^zÁNDu &iW!kq { 5iX|쾘$* (oCrv#ÓƤ{:by v_O>q\E(/da+\k} k =g _==U91(Ā2Px-Z6mYXck8o̠e*yKXֳ5A)1(؂VIitr8gMUuUՒbٔqjŽįzR6éa],&mS2XJ)-A#FL3eϐמ6w8sK)*V'= GG0mN }*J]JMW_VR`qOX\XXѬ昁R!<f3@i˵\3 s=0.u]JK41W"lRP!X6}?0Rfz޽*xaFi&ߍG7GkGSH^~ =>j4e\#'qNL JvQ\*ŏ/|{9QE֍ԮiB5&e{o9N5s[..:-CZSkW=BqtAIrAݩ'\5Rݾ` T#LZH+EۋÉSm CD4`6ڙQDt"D{`\n͏׬ n/,bVOb.6+݉?:o#PKAd11c`͔N 0.ZLQK@_%m-+HպU]b8SotYƳ+MKy7Nmo7{(7b115L5F%nZq [g5mɁh4; q NsTlTtE)!\i8YN{f"-bX-V3PmQ+^$QgVXrq^8':J;zԐ% :GБmgʁXozfO>`s0 dC0l^XzHIX6k܀+7[J)` kzm%x*ۊ wyM]nrQBTg~U{\uʭ%fgš\z\Mrf8;z9K-r.Me/ٲs;onҒ~nSʠUџܚ6*`CըcgoЫp3*Ug) [T\u+[`(kJgbc`-u/C݉:Q i{tו-1LKޡzqEmYՒNPSkIJb}ybiC6-(T 3N w{ƒy . +n?XXtvPMKJi5>f,QV 8d䔷ƋoīRxLx;)[mz<]z<夭.mN\5o7w%b /IÏ<2 ߔ"=|E WK[䗅e_8V*E9UXjir-Vpe`]+TmK(=jT7TnbΎent[p5.Qif:>ClZ:q6-\HʆSȇ}{Ȧu0X7֠]!1E ^=jƯ5M~NBg͇Y(s]*֊.FxJHeVW]5Xw|y tvyt}P@bQ8+&on5dKEPJb |-e]֔Лr.,]צmHgL0vt)қknNvKPtItY;Ϯyώb&cgO#pH#KD}fs 9R,U+fH׭?U wTiȂ%&XrṞ=a,`,MqEXdL 1E9M(u\__@E#-=%Uґ |_ % M65;O^j}c L8҄~S* %Ūk(ڻ@FжJ}sm Y$;"- `"l_]kc0WNb9< 0٨;pƳ^i^΄d |? ]<pp PBb.;`Lǁ! 
`ŋ$@f2(*I `BQTt 'EAnѧ|0^äWPw03i4`^ȡm*0͎B7?jI#FlWx$>0ozd} SCb & S+f"@ kAJ#c07&Y6 &ʮeGfGX6dlx?ǾB 쵋@.f-Od\&7>b''ڳG{'|6Чh㟓x8_p˽{O=yW/3{œ?Ox3ь~@O |Sv_L_/û$;dzl3յg{2zOyφޓftq8>|B;CW~C3}H{p}`3.xdil,'.ҥܙOEqm3 ?Հ/ƣ,Qs3D|1N9NQW'vlSs9W:tx.!/ N 5Ob.sKN&E2gyj0%s41i鬿{3P&=\%r:zHr?lhFO|8Hއ1}lvyfw?G rg9^f8:y'gpuJ8?!qqMZ%JMR0B~\xWx?(9|8avu-ghd4tOa =k_@'.Sc#@wţG7O@z/ݟ &> ~.K{φ㳿^E~7ȫ'''s/,'i*}47uf4o''g0\.kNO_Ȃ} ႲNOM@}=]h[jWC/r`h93 $$ u 88gsqSh:|Ca .'Es ǹ_<[s$UƉI]BjQD =k]~A/5_ӃQ 0fY c |U_8ރ)* 7)J})X)op0e񏝜'c'Y puLع;޲0\>Y;TK18nºTAt"3OCQdеuFX\q*ԥxg%:Jl1ԃ--NNDIuu I~uI(mՑe!dq85~{DX)7x=3Kk^88g zE ;%1ȶ;?CѳgU}`)[! l BAr˰HO!ز ^ԋvPE[1QJpp44ek9i N)"*X*?T Q~ޱeK0{dHr#zJu sXP/{G$@ԅUE ÞH6^y@d'vVKcόFR_^֥dX,~Bg uW .Of5xv3些l,; 4W梜m>=>iISOz|,<'_8~8ID \5' ݙ `Pb##im-#ڱ^0%xuXQ),wY:0%Jhf0%(aF C Cl]ڞ# .~y)aе2#RY1MV_7pLr3E$%Njs޻ū哲K IHEeotilfS{4 _l:)ჭ v:SO:uN?Y:2NssB(+3I:0Q:c#jcd¸92a܇FXDx ^( lmeA(›G"J}`T^_2RXMlzԳۥ:&1GEKͯ'ϯŏ_n~6i~~/oB%h[[:~?9ݩ' {-`;_3ԫ KŊ#/_ko-^HϠZk>5O:@~>^1{>:)- ٦ Lv/pYl4L5>#dk})&KКѳz* Pɵ wф&̦6a6 Yn"ܱm#$*O=ќe9$i jI1|e\ON/zY MpCÛN0/w·X(p} \!ߦW \^@f d9ern90m^TC |`DL͗JoϲV|sl-P=[9)ĶJVL[k*5$}-cɯ= tTiq } z:sjnwzNKN'Zo+sH9#\L ן| 'ﮦwY"ۡC.XdBgZ9#+Xvuz :G@Znd:kQ'5.R# aT.Ƶ}J{Q׵֋օ[ mHR1!xj/J;XDw(a2(]J؝^onfׁcd) xt$@P-q;m2ޜhOdnƭeAnK[knYKѪGk閦^υ'+T>wDνBeuUf+TbpV{EGeDZ+\ܢ=CڹNSu^jƼ:e5KA{WVİ 6xQ2kzj/wu^X'nsSe 4Ds(:S}suJluGB<6-N9@]͕[V p2[ZsV쮣vɷ1*3jkzP.?6ޒzJE"P-DW?zRx~𩜾=n9\Wjofؕa=NXB9HeWt0'ޣ;6Rb9#5\ļ֓d&I:ѻz&IgRkXg+nd2a;6rtZ=E^ xe~|}Y3O^@4)E r&[ލaɠyr8iIN;RsT2pz_1F#)X'+\FP+m t[Xxux88\*K`U^ zKݹ gHUP I)8o$meH󍟱T{9>773S["yo)%-jYHك &(jTy4c@'x,$01HA.8# I"S>XiVX'su"xŲF)}=|ATSi|HhMAd:q3a=Piι",Ry e6yJ+ BHhBH-ޭӕq07nz{"rN)l#Fh,|k6(sm=@݀؄lSQg.6y ƔP)XEJl`p U?4* F (APOc{Zmi;yY|r}ZoBSo`UljM{Wpjpp.k_>]*榁GV@^(p.ҨTN/o3MGF2wf8uұ? 
뿟^ym[p<sI:[ U׵qU Hc%uzPs:n$Hk%LS}_ʗ\)"\a/y LpT];iP,)5]<ڤփXP`E(Yt ֔ d'#a𖫲=|](r ?9^1⇲^*@sͩR#+G61J] l8IyڬxJ6`$EZG#oIS8ɍ1)&3a, nO#dy[ FhPjuހ nTc 1t199sVa0);M)B4C|" QFc=l[5&%ihs?[?l 8MV˸f3Nb,JA`#%d4N$1ծTgƏ:)0K+⬭ (*GP- iXm')k p?pkVٟHjTE9hDDq]r> C(Qsr%eCVq,6T8b-1VS*U!*byǰr^LrM:zdl1B&!z6"p.#SJ'񜞐d1:+S R`2:J&$0 *Kdь NO%ՙzߝ_~y[rpM)arN$e  @ u9xvcH==@Ui5'2 X" շ7> c)VH(pU*Z+-uZ4nJҖJKMĞEUފwY:McD o;!B2!0~kX 'ae`DUA\); \Tu-Xa6CY5ҪHRJu%D1C)!V8&rbZAWo>RP\kV! R@ ș ɱAu+KFI1K.U>K>CNF "z%b m(2\%8j p@yYד j Ň| }<VQq'H2^g `6:>!":$!7E+ mƇ*~@0K=zR@* r|1[%K[%gIa&/gZ㸑2hdȢ6{n Iv,G/K؛Wzx4#uOOXN<,Yz@Ìq[ 3V犆<;M{#e ZΎ0̴_-HߍKyPv+.2 lX|f/ eb.\Zm3Z I${;\Rv~rn7(-oY6ݶdG3kj:fF&K,?(ɂx N\D3|R?/U{;efvQ Ε[M72ɸoڌg5ނFs* CFkt#mKCԸ~rKacolxh20%ҍJXL6_7.e-ޅJ6.4vn*Y aPɢsCP;`ӃQyE#PLJ|Wn\R]q XHL,{!<̶d/*m[:X'%#F؅;e򖻝KR$,wLƥgsjm "v!xcݑ"O>eOx-8$6zh.(CdwE >9țQy-+mh+.2`O.. A]ܾ!LPRT`(,۹(`Ki&mM70|Znn77D hWFG"7]| V0Y+uyEY\'w${4k[O6=0$d&Z'qq";q{H\B( 3i{ȮBg"mT 65ׁkq(Ҽ'n\g1,@G=IjqL0O[>4dK1n6FG椸k{g "tܫs~D!.F,8:)[j6-hH  26n]\3hI2JʠgiJv*H())kK)hΑgbF(wdTJ΂"ژR"TA=5a+zߋ*mfoעxCĦ=+7IK{gw{iS]OLg=g2AKjTh 5%%P]ښLِgw&Z:)ƣ9&i);ßx>aKW)_gq-jK9p2,2sJL_5P; 8;aTE͛=HϕyӛBNXU&Wˣ2˭X$L]GY,H[,Ϯ%".N˳grRkdViePJS9A+,;!bO:@4zU&;@uT >@+t^ncz.ܫȜs%xlBȐSÖ>-i2#NrM+:Z08{j1ʹ1ڍ`4pC%zҰfd@ ٍpj +M܎V+%ck!n1_~xZ~XSD4 \a~㴼% 0}xiQ8v^k#noC1>cp^1Gu"+V h,TFc Lob9iކF55[9boq;C؏ 4ɰ݀V!5c vAل )~N, AqT2[l Rna ᰖӏk܋F5{䌽ci9)V^6VF3ޝhakD'ZɿRڿWLg?36"ea {e\aȮEeZ Cc]^ROJoI]f& qަYB&ct=@ISGREJyXmրo'!jV=QP#3NFg$eX::Gb0 >ڹ A0Qmt$:YotN nq-7ALNhHGp2E!dSʱ1Q:H4Kby['j3_ ja`8}eoעAB~Fy3 Amݴf;,0a^z߱A9~,)C KyR2.?,%3lhN7 z0,o۹tWLzсa:~yj}C!X0;,j]l{983[э.:Ѕz&c=pbEhra႞A ](=;#"{JUJ9\v%]GrƊ<ߞ􁩒}5G?W9Ӈu͞sׄ~t[鈝Kyjs&_]jo>,޾eߑ$m"2/;iI˻aLi.!6*ޏbjZIv_Z-;"sˀU<IJ]o:7&B'RQ)rZ7_OR޿\f].$nNGs,Aa`][^wmy}O/O姊Y됉N_g!{^'ܦn<>bW>deϷ$fܦcTķ&0m9;jgrOFmש]6P]-ǟ7cYrqwFw˥Д*hwβI@*.Kf/<(m 21jAW.4|6{S]S$pVO~'s ? 
iXA(vdWmSesc#`<~*5 ni/SaF~e3feȒ}U@m*FذpjX֪cnG>g,CE ոq* [8mT/85Uɹ}xV~{Oo~O>Kwn&cU?:W?A$Z[?@{o8=y=OJ:Kz2^dXrMm/6T4Q^.a@-CrPǒ ׵7|8HQ%p .1[zШ_G>Vy=|r{ 9Α:9mi:Z2:09s)ihqB`{=te/Dz["#S:ך8w?g:``fo%4sG/bg>}ݷdB4Q,Gw񋮫yʹTnLTBcהֹbYQ1 : 'L'O>wz͖oZ=ߧ^'P4ҫ-%iŜ_6*JӾj}Sr`%pDQoKӼ|K8*{ g:[Z< M?E$`t .?Q[DvDfGsN^.FW)mo+a+XkZ$}~T ,9Ls_:-xDŽ?8]y|1GY{49\":SJg3ZLvXԫy&*ys!ߎ ƻs_%MZJ r <,׶ZrvC J0Pqt:d ֵEȆSSgd]y0Mr6E%NQvc?3CӸD'RZP|+EE=)z*E%A 6 ؠGm0biXԵVHu.T иё?LJxEɧVnl xzdžG엔S4պd.M:^VWQEoبt7̐Mz:M皻-5F\[5 HM㫟X3ȖG~w&}-ӽ> r8?3k~9*cvũpߑ8mM} ~VG=85u9)fZTtNUvgTjk|`j!];=6LH֥c] o۶+Ap1lC@[nq5hwwbwkulrKJ-rBŢ,EB֖%ysj:sxfvm",S6%|M8X+M~"="5+6OVPYxHx2 kf!yB!=)Qb_Ek$`*T)"X(.ElVch#'Q! )2G oz'G&[w良Q,-cΡMp{s ki+75GF+g2 F ma+#[wWrNi d*%\IZ;\ 2hR0Z }V)(vɹtGE$SL1v,#Dl+aW_%e//+!,H: v%#9sd}&dMY<'Iy.sjUAʋa)1 \OJ77s .$q  eNz_?t 9?b *U3]b1aV<](zѭ x"g1s;h):IPE:O]='b} eGr* kh!rQ"G9j6XqhR!))wnPwn̩nJN̔ތ؏_iخǠ3_e_ [V]2KO +ڽd57Q~),~j$YT?߁:w<7A-l$?=8ʒ:Lrۉ9" D#i?vLzCYmDQVdIM ÖV\2Vᰕ  aJ00On p72!QZP'T׹lh[B5!F{[B FܘxB-B51&"(jIdim-K^JlfZh~1&6p6+c@`m~f e4PalfXk| Щ @wl6H@71,kS75E 46X &7 5 5+~VB 6"C!nLm,Եw(␁f4w%'M oڤ! ~  bu#n{"_Ux始a{Eahn*9oK*!ZhJƽAZr%*Rxt{%XWV#LƳj?% p9J:RpH#b78H!! G軕e# o8E }! 5"aGMm_uk(D&g4xܗ5&h ~Zc̝ ٨p6@*M y! &FA z@4"0i`&Y]#նdKP@R %E NAES)SM]B6[aAP=D8S%E xx4QYnM&=& ڨ%Aޢ9JJL( J=GD rq EH;;xPkEA.;e1! @É塽GD`0LQC驐sc,ԫ^&-]_F6BQP=DVsod-C U} da'EݱX)Y64meUᘊ/` f `GWj+/6Rp_jBnM3C)9L? #([3tdq P/c4Z@rΖ08``$)E 5vΜ)*dܸ3Θ!Jq@\RGeT+Rwl9L 6!`~)EmpU0dG%aTo/M.-@qJjDk%nư . ѱqP Xy45|yB[RX˒GҙFCvwۆn<(3l`j?!5x]aaVCF=)5y =ӷkkBk#P]k彇?׻Sw iM!M;J_?'̂齢[mw#%VuߩȮh˶=;Ǹ)Kv[jGqmʷ^CXBXuBg!Tpo Mѡr7<aM> )bvtDWuo?#͸6ݯln"Ms'I=%`D"Ҕjfpa9_9~9B}d}"lM`T^nlc`ra%熼@(hi! @<t3A3u;'iX` jo͆_~-'r.BPSݱ@ZVk'GSϯLuߌCp<Q 8 fBUWA3tM{ 쮣=pVQ;%5!ah٦C? 
gTb4vBl#f1wJyAqaaYt{#}輳0"X@H(Xh2`d-HW9[XKhl^< ^dd 4 TҽIzj` _@MIɽG}kI'$T o›w!tk$ȤorCճgW 鍾.9]%:eN`<17MƜ8M%t CDl-1<þ(@S&]YO5Vj<<_j9ή\Ṕ/ vc[]>~8;wf-p2oTLT :9xt= &pٙzSRlǃ vGGsz~@KLjxSZW3!Qf_gl ْM#y/L֜ tpfB$ QQG[AdQj%pL#MiJ{ ٳ5i/QPkAq%zFAȤ.*~B&D iWr'Ȇ$@@:Hge'H"y͚ zcv^KL{3tmVC'ip;b1 #}uV0<x0V3u:#ܶ)`osBp re&:ez4zwF8+Փp6;?M}Ag <:?zPadz5W&h~O;Ί??rޥoQW8Rozx>Ijl2v2 ]D t1Aץt/D>Xb5|JPuk$QHx,!H)ʘ@\g5瓔:y=R ~?SC&?N^.+7r8iWnH>? @ϋJIϑJp sT /gè /g%EM?2y8v _|x绋OӉaS(D=ݹ.'CJ L\*k2N.g:o(6ӚVqjoBUcIp񍺪͝/K~ᗟ^}o:h!c|HRA4@}w~/CF!|C&|uI*g׷ >缡TEhK/Y?)-0ǣT楘 sw~) ݼ)G0|! 3M 8W~ gv7$YJԩNP~ @eaZeûkYXk韇çx^.ܿ#7҄>\O/z!2- )|FTn0]')}!ԏ.O>]{ԽL 솷_} kM(eKUᡜL ڴn8-;;Swa%g^:K42o/пGٷ_U1'}'UfrTrG^FeUoee$JYiizBO JW)mIvN}*1!^' >f0rGmᡌWd~\9dK?PR{KuЀѝqqYs{]2 ֕J +J},E:bx~q0׌Y%N`o5g$q,Rgn8Gͧj>UMᦈ6Nsgǣ!!"P/yӌ6 kb!E&/T4J^R}gaꗓ 4'ls?(,)$}C[T6=xxRS\1D\AX{+mx>EfR)!N)& AЁ~N>i L} ߔ> %J諨05)7 )^>Q'DFyoVܤ<3XU Rdr`W` uJ^_rW6{\{PF }?9M\\6k}7/a׻>)Xj|zY~ԓ ծv.+dcU\;6 GAR`J0b2oLh_ \W{#!LR"z@ Eb_%Qf6+?g|x4+4RhoǙu}4-W:oxE zT0"F^wZCUҟfogF$9qw_3oM]c _Ϗ=BfitFPV5#2#e( &1F(k&&h2cf<a3)X#56egPց5Zc}okWj` %Gja"҇zZŹbrVAU fӝW$FY03iiا) 9ljOwd'{h=.x6dBfmezm"9&$kh&1Ĥ2 {7{#ЊGF]嵛x.DHny> LJJ*J-+ϨC/qeȔфim}o>SaF)nT bUikJ:媃]3,h~P)0%Z6Dg%GaKN59@B$}vY$38U[q5^6VJZX*jՌU@7*u?XRJT6gQ1eAPg!r29뢵Ĭ.& 1xbvBCgcAV`yKf Kv@Zj NVgZ]]_bNa㱌5*y1 "@4 4Dg&%A+EAKV'5F5&PKy '|VIZr!65T0odYOoXd5E5YW%KH!i\C_<\ hzN)r!0997iq13'_L6uFT5UejJ̈NF2XZWY.eDHbW2rf2TZi9IMwd㇏+ISSwOPՙx#QVMR\۵`?еw]}6iՆ{V\jsFv?K]딶xN>ڜIu>UVldo(fa]rߔڵEpxuCo/RumgQmW+`iG;;[ǣק7?~Y;IFGAn_@2ɶICxύ}wf|GPGV:{jiYmǝGjqGjuzDpjdÞ`žT`"mѣ̱iێY謼 ~Ζ^vxB|pgh-&|j?VmRs[qrGjvλp}3vϭ(SN9ջ=1T^UQ/WRSBg@lY%XXdnBp3M"r(9L`%%20e^S@Ҳz)-&,nG|NܾiԀFע0LUB9?2_kLe"ra0];S&:Ƃ+[*&feF8%%qU9 HNjE 6>%Ȥƃ#x˃!S4^=A+6$1$t1 A\4Q3S/xzp竦x͈+7ɐy˩KV-Xiz'BEMш0DŽ>Eo"Ɛj(vM޺3Z$e:B=AFNW;mJp2&Մ~@PH+E=;?_]*F3)NPhUZog[XXXX5ib ΞĪO'KY-Iۼx { :B'$L$1Ӽ3Qp0'rcSDK!^qXnm0,WAn &*yo^RD!=Yra;h>Lٲr JЇT`eV/ `8"%e$t fiU01rsȒ tQ#j =|{ǷvAz57ONu: O/]9+Y^͜s٣x3[⬻_^nQ)^~s{݌iuU_ҡ<@wp,TWPRt}}4'Ws)B}4 Ѽo> R9An ]]u^djpu˗SeH]JLI}4:C.Uk$|=% RRꕻW.ZH}L2aQeuv09x!u%$G f&)lI6GL 
rHc\ ]]DFzlGʮ7iL9nPz ɩ(Br:$RZUKP^o͞ {ѴONY2fVmZᯯ{W{^t|{?xN4ЫMϾ hCu,[Y]윣R/G)EuiΕSncbYWGb֪Aikwk4B4xt;fq4`,tǬ9 I284ɘ4k7:z)k6PRm4q.4=[:j堭r5%rg#v Gskĺwƹs+]m8g_R<27r~ 2z$})l.*cZf:G")L̚Wo- ɹ`JVap83ာR[srЁxU$]i}Lu T0lu:PN4kdjAx$k#99f kb{/*a`r^ .5D$_Ir%Eq^x1VMRW(әۨCFizbNHɂep3iTnF+;[׎IKYϤx.rM8x%G1JDKcqx$Wx%Md( P5ٴՉ%JmMTW?2n n) Jn6wx1Lx^y3nuL$uO/ DQ-0J4 h3ƣzWZe KU fm{lh`C@AjGS H;,`yǹyNr,^[A`UsQ{ƽcF:e$gB\ssM5cĘ(XJ/Ţ9Z_ FՇv{?~LQc~Za;p ?=nbѷᄑJI,wN;-HӒIGѥ`"vA^7U4y//UU!fa&a>Y´!^p3ְ+W&$9P?} ]m }E$ky$㎃L)m5W_8pTzK݅&[t/<ӧg~Nnf~N^8%.IgyOh ʕZJ7^a*ffV|:^:>}C4}|o\{&baM8 'քÓhB1^-^tG< Ej}FphQKw;&!8}4~&(qRsgCFAnr+D ]z)rW+1tɧqE=(F3E׈i.h{X᥃RDvyL\7'](zE%2\HnoОjb>&r&X9+=&.zeχ>= s_R$V>:7Q%!BpRV##*Ȇ*B% Jc&Mcr,p9 ZČ&j0) hjt.-Vh_DS2h%h)EZ(?$ql;u2#j0g_5u?-h. c }Lk3g?GyZ uW,8 -k]T4um=OLZ&qk!)ֹb@`\p `cxL0JNE0Z.iol{6BZX%fo(xC6 2s9xMa(jEjQpX/i/E%LkTjp3U5*eSRgn LJ,7B/znt?=(Ưzy|[$r~l}|i~>8w9f_ohP"ml=q?tf <%HZVl );yŲ#H6iCLE)>IJ5I.)2E8;SkSM~LU% ]P©"}O(z1E#oM PVTuH.I2Ur<~Z9r1G Z9"(&\ք:gET7O54qTSQ>g`E@1ؘ'@U^ps{gNhSMrwcqAS8Szx3{]l^d&n>̺͠;J!^7<#`ж{ʀ3I욷\%ʾyAV#{|`4$h>a9?&4t|g)i)dd'%RƜ&H9]H]`2q''!ðVU ?Ox 0IAtJn\&eypk5Ό \8f$ *gEbR>&u/=8,S {Fa٩A2I@xNʀ]nT`x'h1Oobϒ_Ex;;-B{(g|p=([xusdVgÉ ͠Hϗ@2W6A2帬FM7j߿-g\ nZc|c~d3\ʃ3T,Lc2rƝ7Ê⫴_7P"qGW%tlSFɭƄ=]}_zq?ҕ"۳t 8ƹ'=rUŮċ-};% RAr2OIE{Y*B]tiukiF._!UG!g7jlGzMvSXmzSln@\^d/d?(J^Om/b+oLl/MshX/}3r|p4)<~Q9Y2*͓QTzj) A<].:jSRRC6[gϓ[N"׾\,u%X1L7_1d'; 1c,<*qL;Yc&=Nn,L='u-\(yJ eb"TVc(Jޜ"TIP9]w;-*6M֌Dp ;'rL α"+ 13K!&s};_XUzHP9"/}5^W%.ipny|0 JX}Yjʣ88iе}{MUNXBg682#%"Gɝmv,_47k5l:v fMSFSV(os:JS`DFzϩb}1WKLWeb݃W \oc5;xszO5D},zmFX= DkQVo*;)L7`$JEw DU :d , -yf%xԎ;5JGjU~Vhg]{Δυ'j?}fBh%lBrNh/a[8}AKCiڕ{/ xNe(K0W҆c};kr\\{6ǹWp _|<de ^śx3o7y:a` \p31N B`,(ѽܻ)C-6&,RA.vtۛf$݌g_ONae%w~zܰk&)V̎愵TC*=r VfCzEEyvW"1ٻ8n%W=A22YГȮ'ٗM hW]ogiif$u1G_E q|`Zx5)!^1&X@(J^틪^㻻A9M-ޯYs8_zDWr7heEKJ 2|*R,:*'OY7("@C:2TR!&|jQKmJ4#uH:Y ^xØ6p@ )W`ː啀Q jS@]v&>?';g Db1|0 z{z=!~ nt8Sa77lxw~:>ũҞ3Y;+~>$W nNS:k~ΉvQ;|>;糀dI*F/wzpѥfCq8i4_VQ`F^FyI ;^Jqd|<>&;FDh\| L'3rTÈ43+aO6fVFkV*B'9#eOO 7Ҋ^q&(Tڭ)b+ P!ě'۔6StHoWW硜.VuKMVŝvo4=NhqNRl8ϟv<m'FWb p.o1;V! 
2A`d<헸)G Ft,.r)Wx*땫$>hج'ڊ.w]v17G*S!"tJήP-ThPjZ*܊eLR-ʕb%v2 bg~8LşhҜYi 5R$'%HS@48N`#WRB:ֈ0@q{8>MZɤ]Uuc$?xFy F.D^pO$CfƒKRhP)BxYƢ採|9zH 0ti ]քF">F:S٦\)B8@7#d$H pI;K]¦U DK ZW U_z_JvfP#2fDz"VeB%}RAu B#kqZ/ͤ혟F*ZTSѻPCwWU@)۹ Zc#*d2 <5:a>Nz%>_c{}hkIJZ%uSQnϴ=i+gm~FߝYɹ$߯gS wF,;IʉFҋYK3 =N;hC {֊:V*I TP&: pg(%+tDnDY5Z3r>9h/ /8ѱGHgs?g<ȇ!o/=> s"̆<_gVsCXR͍8V?/ǗWوEgͮ<@]ߥۧl7_w۫;z-'4KUznU6:FwLbPb:vnpgyꎖz>,䅛hM韜x|c` ޭө}G0ԩϻz>,䅛hM_Oz7[-%S.m{$J.-ݪ;Zn[ZK=bNgzDt( 0%`l}7O@Iki|ٱG``Eدa.*mYtWr<: "nF,v7Dka& 8rwNt巊+3w^MiBAB羙? [|g6-{~=Y5%vb;{?d.s|/gƟ뫷{ON)ӹ8 g*h/u:V hTjD%h%Yc,z\[~쿹xs՛O|EO2\B5EJ|UTjjb-ECaH=?.Y:1´kH43$*Eg@ V*)߃fe LGyj?SQڏS{0p*R \' )&H;i0Ah$:i0{틪 ׾>o:DXT*&;-5KkUY#Rs2VJ İ(˔Q|3B\-TkmQþq'vȾ :f$|g(Ko-pWe6BfDk󠩉< .=På H2FjJީԠei]l_,3g\$5h3 혽E!ZeP.=8pF 0BȈa0Og|;ۻk.E=6! (Ξ P = x`O<&AKv`iC(jHM{: g/%lm=Z}@BmHK3d ܆"xMNPFIr@C"7cʐ3bb3}LmyԲuQĬ*T_V\=@l)^x{/.I<zt4ܒR,JjIH ^``Uzb ♢x BVH >l-#-*Wg~CUBJipʕc*ʕ]Ÿ2__09 J [*щ`hg[ YjD=R,䅛hMq*= hŻq[-%S.m-`4VRևpަք6+lwCAUoUɀ;Ãv]tw0p0mźaʱ]ө4^]˚4g|wuA_HH1i%(;8fsȌ3;إ{? n0p4Alp/-A<`[^v+d 0-0Aa0~ ^"n@/RL4Yy]lݨ}?Go Z]TJAOI%<ťCubMYSh몝LBAH'I)ZҔ" ʸAjCABCQ6V蓺%87H "bzOS,5ѓ(]T;﨑9E4!\\XŒN*+*M^5R3pa3shC̡%-lbyӾO{^4 =g=$j=>%Vq^+B,c)=я޹$Œ|" ieS7ue7H8ʋH8j/ڴ Dp%hEbɧE|[ؖlK{\.]}Qz˷6ZU7oڕpsc L3Å c4FgqglIWcT|N3+tyl+#@E'$ƪVosu63>Yr'H֦^=ұݖ1{Ü,ΎPW1Mw Uyu%JJF:If*DPUvYeA:!,K゘rgo +"!6%NpJKy1TК!Z N1+ Ix%V>!jWj^f\3.FeVp=57B#|[+KG?F|9>C@O"k˫?\}:OGw;'E|^ -θ{_QNŁPB͕ٟv~(ouJW]+tOؠEDi%t lU83!]S@$ c R#Bh!%vd](8 'r*T*M5B{$)HsmAZ"d'ChqR +H!crD fA!@Eǧ1&jEbhYv?6Λ0RS!HY̤\U 5ho"38A"DJewduph~~@5m8dy0DN8 ɣZp |[b(oS=A bZP!h[RC4zSv Y٣45H6)`GlfR)4_c?*"46fr2Y!b ƌl2ǘ{Ǿ;VŋhO~cn'i5'tBiXRǓf/=\眦jCY_ۛ:ȩ!F%Osr<Ea,0`V\P1.w0x"j}?2 dBYٻq#WTrKCjƛK]ropr 0DIdklQ)IQcmM-l>h4eHڽsj;9Ev ayKHQ"s?-YNw BkcN3̈4 oJh 2B: JLXhǭ+j6/B˩Żܾ< *[>u ^/3&#% 5O5m>TO5V@3I 6XZ&N()%8#TT: ^oRl R3aolgsV̼{. 
rNр:E\;@4osbӅI΋`1H8w"wJ7K.Gp{Bp$pY!oIҥvjfIb{wv`PX?dGsҋ .kuHB pzl1gydTXSaqF(P!d&L eUwߠIŮsz6 @)CNG+!E't>WgzqJٞ;Y|W"XQ"vx X!^Г!,!ԁٔaʍSNrkĹCe!bsZ$1C8) *FbnE,քuO=md]b@0#-S'!NC ubaHNՙA'1X3-Qk%r)JL5Fg@AGK4AeC9eMVs z:z )./e$#dK_Ey/ш4pU7%~Db`7Rq6b>ߪ2wFG; =~'Tl"_C.:E'9@HHy /-:Wݪ76VSgVK@&Й[IܺO}X1\jvc1=1u!H?eB}ux{z|+ џr˜esݲ~o~8vנ\?ǵ=.٣ A"hg$ պxtrkp/- jn||A>pr_Kui ρԸ\\7m|& eMF#A.F[<Z9ُdˎS_nDdrHLNHEg643,RQJWLaۏ-KK%s RijrȌʹ3X`m$2qUb)H"k^Mm%d$@aN69Rp褶 ( c6 s12mU;.Dzq[dhe!E,)63WkB%dpc3.JL;C9-2}6=SVLM2nWo`IRk8>A?xĈ$@YH99t|zABr|20/B[!+ 'cfBJT`@8RąGT"}9ʴbr-)(mU)bƦs!bU*̅ʅg B^L'gD=%7ܞ<^t-/YIep`ҾQT^B.I뼝ˌ E9%٤VS'{ڐ13. (%+ыRΗkTC, *N䡘U݆lQMє1FPN@퇤@<S>9V ay!>k,Ԁ-bEFoeʇ<@}p΅ZqVQ>qKIo(&@J~s .fgH[ 8CܛJlƻ, o>y lwPp!;* ؀H LKY,XuSQ?pu&y?Ohա~k+dl`HO23 &8o|uh.I}oNi0UzbAYѹ*Z׾G|pSmر9{oX#+­JsA/41*e;ǀbnteUj@Һ"/[@H%_]FGOc{9bnPN|?mS_l?bc;oGw3 θbE@{{Y[P)ꉂCv~We Ś!n'w70Bߦe3P__ZǥQ-׏kf6n7n9БHG"V]/vCFK9THO"B.ϥ;K0#Yrm3|`Qٳ;O>VT~12J(d_޵X-F' CLV^.0AAHs 9gFY PrDD:L:,@g_v48 *)a-)kɊ֕?}޻~zpI [hZc4å漮UrfB JdaC=!ޖZ|$XLԡ\ȸxv*f*/?SP{q7 ߎnM<~17gs0}ܿX֬'cxh~B!Wg,%/tny+29:ebpY< -aYd0 -J EnoOscQbZM?D^ZϊNeSKM4Ŧ((v,#ѻbc:hݎ8{tҋ"[M4ɦJY[27R{&Wۇ +_rCDy _D_e X"8$:H5wӪo 癬:; [l=$N$:EnS:}{>XX.Rpg/pCn&)ڈ.}7DVVM%_ <7YI^t bK;A؛[{b^=ѻ~/|>8-8!2$nSI)z U=\:|9xC?Iu<4B cڞIwPGWR>I/Yz9L]UKE3gдB2,k SR 2nX 8o҉f%F:nhZG9C1ĸ iF9 mǺ5tćѻ2]z?#;lo@ R$9NMV;q9Nsh$ w+M Qk9 -JN>\j Qd#m,23øzw;⦤ik8һa!D)$y:Jjl>MWX&׃ɜ"e3_L!#3N1UHՃj}Բ} .Ē!0WR0mzE`(i=:Qow=l 'T&UOp+p n:&A2癫0 + H$zI{ p{vm ,E}63Zi <)@ 4Urq%!+6-]׀6TxwֱO-R2ګliAΌm;e33B1ɔ@0F[%o-W4Uܷrl-x;$noD,QZy5)Ui 8ތ2 띖Ebԣ?S3m:X>ˡ,THXRGwOſ{.)Z|z{QLoBR˛;Y=;%m|K񾗏CR!1]7yPгP_}1Wm;{'ʢ2UXOއL ʽ_9%89aLbE1R00S n_|oXINX+ԦB&?yD>78/ja>: W]dl󏙶uE'{M7e_yLsiN׀%wp q."Z甛a8 T*uf*6zIRt5Z C?VlOOvi'?$9 I#Ţ"ZI nŨE/V^o/,>}N&0똚eo@j ʻ)Rv1ZNxOvNcw:Xvi$¬9rH(P [S3 iCRprFb&b% aZC&\BHuVA"Źt~ € )FcfĽW=x;&)/eȕ ySf1O^ tVZUz?,י{5?*_}-*d2{]t;MDZ_UlO?lXw^OaVpU)goSwV|0냀U1c!!*0%_,øoocMҏME6/ 9?O/} 1~>JjOqKڛF’S@`F -`N6A]i7+ }bMZ``ybw {=f0`C}U}dUev̬, Anue1/#"ŭ.NL/-8_ì]kZFf*(%sVuѦ"_EA=zrz%J1yd-G!$p6W7G:FxAD_H2?W;Ah`Yr/i:c͆>:;=rҫe>)heӁjRr L1vT[,h41))Q`m'傊 
ȼqEoDd+VIju#J1:sӱQ*HicMR`&gA6km#&ViVMCl2SD2j_^ۿ- 4/RZ,>7{ ´`9פS>|ysaKBk<-#Du_sa/Jih/\xsM|*R\nK77n*\[BÃWZr0 @;%T$Ԓr~ioOmâ`fշ]{doekی#̩gmC)%@ЅbcVI122UzOLCZ.r.g)OX=[d|/ðW&+ޓ*`^kٶ}8[KAzCEpFFXhpj*x'E XP*F4:HG*S)CgUmt]/tսb DJy 19^gp,1`YĐF $A_#N +E J;#+W/#遂,1,&LSj)aDQ̀edr O(Z?͹^͗v/;A ӝ?_^:O~>w[F$"}VG8HL;7<:{*w^T '7ĩ9ê ֫O~pnn6>k L d3%`Fg *r7\Al# wry&AYCp;4߅nn> y;oNO~y)G'?<|~7~j3{~OQCQJU}AK0б}/kOn}ɂ::3 LI+^em<66Q|j%tɅ؝s]~r/??Гea^9;T>%(뵃ʀt`¼5K|9-dvД$'E\ AFڧ|GdBc U)"5ekopH:rZlz"C%6ȨE6wܐuw""@@A*ߙ DsGBleSS gpғ& tk A՜9%2?EnI\pY((v5x!fhQ$fecm$&^&AG:pdOB+$ +(9AšӤt]-9`RּZҽ l CUidI(H#\|40emƎYoIު!dp_/JV()oWNUZ==(UDo؝kgzfrյ_>c@K[t%*$ 8隬,T£w4j(8sTvp0p4LVҨc>h()rh"rԚ9YP:WPKW69{DSBز%Sp"kDa)N5x[,-=CbMF 4TR%Jy4@6AR)[f% iڂg4vAd79c-il"&fA$I520 B6TV趁ί>(, U!7,Tkh(3ٔc&pbR`epʗ|=G=8{L Cw#h|jpDӍ0i666,|gc{4~H>;\i~P7<\әq]S'r07n=1_|pnM6LN)xomz.p z{hveٿ~{~Y&O>]͛6MK6C A[E0,VajWՀ ͰsV6D``oڳ&*wmM 4!p9ta5/a&)ShEF?hKZ `6Y\~j_OgsvYw'R+;_[@0C;vx5h6θjƿ۔s6ǥ}vO2xsuqp}jmq 77\M x?WwL_oG1H񴍃X͠|w*WUY+7 T s Dhj8*!Ee"]w>)צ|a/UpyBQ߬[2v.}aϖEExY8?ޞ}APT{Q ֻ:~:B5aSOr<)P}V #}c^Dx2%_CXvxE[UbrZ׼ud4%ÜJL2x>Io- p}V1?-W&}yifaFrљ$psM{ٛsVQF$ɘ Q8B q嬑MZ"9mX?Aa37Kt,mh™r`g3GDG!dOIE ># Y`~Mn t5ԧ/򀵅6~j`+ W\ Wu@[}&+ASA&x拦)ѭ)=9XB QHDMs#PG&M2Vq xȼ)I-/}jJvZ),M{'wt?zzDq/i~='es~,?W? |/FiKbw/F~ILJ%ro5ID3Vo)yoWakxu}w䅋_8p]59c@…l",{_1ǵfMzs;w2f9C.{8"ey+Їìo}]<ӕkoګ'W`CkhQ%aҥdIt\D[BMX5яRZDECQ1iHX&n#FikC \+~4CVO [Y򨈭q2C K95RpR H0yր \SD >VպѼZ5R3ylB);kci{ yk'tŮ)H*uŮ/vbW?'[iɼ5OqQ}:ERF3#nh-;U/ _7-ށ䣿O] ␮h E`4qڨpy}>0ӈ13OuJ|ٓ=y >]w[Rd ޠ%  '8u28Wn56eOnw tjQ9 s@ޭ:L6rݰ)iSVDOwYdZ MF24C 'w[bq9(_@5@s]1y'"(Z2kt+f(\e6bT+0R)RF74+ɓ/ +B6m]ޅK{*DFs`frAUƛ0AT $Y4d E,zޱV ^+6:!" 2:ʎ_9ػ$@!΂kaJ"PL*vV]Jϙ?(s81#wN j$WF΍0v0|$ȿz˝=}Zf_wI;zSh_Q7 _n 9ݾЩzu \\Pԙbο)ɓSFܮά$h >DДBASAռÈF+[]緷RvdzlBR|s/'{| rL(^_sPf/jdjb2!_YiPd43Wcy9LxXO yg,ޫKxD= W!z:ޫrټW'W`^hHz%N t@[ `|Nڞ]zarjwb _3D+:+UփoKyMnC 1bpIG97NF:s6* jTLX=K%zռxh+>R`6>8h l(mDU `BqU_-+֩MڴZÿ[Tm3J(M1r!2=\zMmFϨN*ɤĩ`s('G~+Ս7'_#IQIGh&C6*#`rΝv8 4cJTc̀>kAu ku@F@1%~ /8\J496SN`Fq]4! 44)&4M#'`j3('ϑmMӀԬQmobڠ! 
%Q1ɝ(Fpqe<B)-$sf"J1C\Ă+,4zA6KKZE 9w"/9_&Lw {*\))Č_+H/3lx^f5 a/k4<ܲղ 'DUKI&,u5 ( ~HF|q++ 0oFJt:%.uMX{Zث {$0s CH$_g ս0Bˋc|A MA*">ēF{Z?rM W)ej`φ ,)_cР2f`Yp97jvWMRyifoO1]P9%bwO?  0:)W1 b-ݲ\wӰTbL:y|y?w^e69r"(Fh= XF: 8?iLafIZF畖'Orr!$wOEƟ]\O͛Ț.&(笯?~xX7=a_ao?<_}K >"\j :>QG\$cf}̌\eOmvnaXTW8~ ~,/ A5#zhB=2Ta xeI)>H_~ݘX b"e` >jdOjI<11PZ#=Y l,#fW7yl|8uSGM3%5JJԆoL\]]%黃if'/ e?HpqUi7΂!0ybXǗ}}W-~_evY k nIM U%8kvFVlbyFj+\8[ij_N?H5 ƱP*HRn}~IH.avL0C'01ϦNӅi l1jKcyLOAhkQ-9:]R2)X:VAAd}v֧o[lIkJ :Z6ڒ$`!C Xl#1 S:nn#WSESܭ5H/K/5%+akx:6OT nW|+Ce!9OlCQU`8+L%P 1+;ӿJn9) $ӬpCS]w@-m~@8.e0s MYQovސnHӗAn>M/N»%Cxƽzaצ$̏\hMISɁ6d߸.ۻϷ󡳗̘M7 N{>z31-LL:8uZwѨQDFx`zLaN#MaBmj*5zJ ;(f%ow A\FN39Dzա)~i; [gIs:-PV `pW{he14+<;<YLԚd]Sd+;GjMx_7|qGN;k95`W҇_ llR%PIakué2LU` >}n3ùazy=QQ49vFD&gɛ @ݭB_L62lcBoc=gצZ`@eU;`hқGWUU:aQP 1aL֠{bD3\ˌ3rQLxY՞J14 k2puï8p MM4"?hqr;HP"%rcxWQ&̐S)Q29i{tF:2"搯1Mg>௛MnQ5e^ė )O,vN WR0c2h ]ڻX*Jd ˖k<Q>rv~! Gq6zz&)XiiE>ḵܜ#I9 '#!IZi?='04Q/ 8(u:68l*J']vm_tqdU FH稵B$/usVD/EHefQFa P4+ v~^|W(yd9=bwO2QjyA tulir tHaAϛ(Nj[iJ9C.7 L,ɐqMYs(~t旛X- =&U9 x襟|=zO7?~>=>\.Vywq^_qIw|E~m+:*[ toZg_:V> UaYeB659v 0o̖Y>U$uA1<n&"E*Ń]<}|#^ 5R/"ŭ7j Hmii95qBJ2(q b9nhg mUX@ZWsMf T?U㜝*pj#ilA+p ԰8y OJyLԕV O$1u,CpOXm>0n5 m9 ʝu ݑ:6P ׁNsV7]x褘2ڱ1juЩSAU|V|Pp8U"N&?FZR. ଊ"T* QEInM8w*Φ#m.3lqrvnOV[^bKGp&yt]0u8#: D9jI?;g:r!ϡ`o>/]S ineN[)V~aKz /66Umo1zl}FGz.'Y{-JY>zr=,P#K5LE/3d2E5 jU ,Pi,-'%i%U=·{ɵ&5@7_5*nnEI»C.1[$ZHDlq>m$ s2{n2$p'Et[0:{H>!Mu&QߤMB۴'< MH4>zrc(6Vǥt!Nutіz8l=*J:IfdQ`7n5'7= 6v)kmFS*12בluKXavq?.k%e ]|-w- lMl+ƍ(\|^Z !*O_X8x?߯K*|u,r/t4v%Mc{ t/@Q+~^x8t SBkf Ց |ͩăq>1iM1obo}Q10]+11A'ex`O`*)3s=i&9li3J`9OA3 5UU͈b0ɭ5<lRXPךSs1kY~n.IDٕ Yeթ\D{M Ur. _h5oH]W&>6vj9OHP_?vl<ɔmpд/ WiGM#& a KT(. Bsm3V/߆b[#kծԈ$/ (ktaNy# (1MdBrͲ)_(&L h2щq_29h2hM % IY=!^;K1xwbRh>Ϯl^aU Ҩf#|+ZO  EeR;FMҶikWJʍL?}.xx.]N!!؆*Rr_w}A߂)1ԬbTWרiO9k{b+Sr ~՝N"/͘a.  
A WVo0oFJPNɤس* &"qgJQ\T3"^x ';/k%H=W).Jc3|ٹTYy4X4 z^p`+\i3g7k<"||U!h0>nN8~u }cޕFr#"ewvjއ==Ůw {"@73`|拵q63fJ]p"8(9nJ٦0ͻ"Q5Y~OjFDT bhhLrN#}̐W>E)zchI=Pu\V_A.HlLj͈J0 I&@g3,Bjg\ X _Ffl i)ȖV;*뿝y<'w?<-f'fv]Y?M0cQ̾Y?;e VKC/׊L RFEcpB9sJŚXY0f HSFv/TtWoisqW/ZNGO3J+J 8HNq>>[=Tv_V>LܸKg\].Rh 3Ӑ0&'|s#Y;.Fe"Õ;7lPvg>?_xeHjd&I~W:rؖ_t;+:O_1)ٱӎh_a 囎keۈD-)N&5綾D<4BQK41dn!3ɞ˨2xŒ?^}EE3%cQ+sUB3wnrrpv.ȃM-ؿՔaJ`"ȞR7A=1G\Kƻ[xi:ֶ1ELFDbH1YCD,ba)-)Ix9u յAHx]R đׯ5jHψeUf)`v5]rÿ[]/m.!̜]ggDSl7 Y`eYT#M5DM핟?uth~_ySC*q.eeKmK0NQN*U5KH cv9KdQRávF| E8(TAJbLp] %'s%!AGUb٤H^UZerU=l+߆ߟi˿Hͺy.gCܵ-^v@*.$^1o]<A_c.n"o=%Џ1[ԉx{6ج ӓ_V}}+~Z𧣴Qۺhtm=#PZ"=yŋWөr[ LJJ6Rm<4J:$Ko%123ᢏZ]=J1-Kw~GШ̓̃RSNl:2?.>I0 hYt{09Ɯ+juVft>Τ4q^߶9"- N]},5A8JŹWLwH?K?SjmN̡v)8js'89b2)qmx7$ ' 8 3S}hyX 4x Y-kr<t$ԔYޑ3E%W#gxQV.27/]/ (zim,mG^^bJ'p!VW +0Wrc2N<Ѕ> s C_ևw䟥{YwZaQkbZ6 _Iuه9`v`ܚQh$Q q2IL5k)J)*Xm%8 q\0`NKdlRU¤5.M+G1S e( <"J+øgJZZ_eԘ֏ؚTw_ziݹFcw b6H~c )ҊLV/ӑa9X ZA睊6z{fH sJMh퐊2 ➅~z9[> ՠx<ᇻ/n3`-ng&"n34p[ pA3V}n|E6ui-+M\X1e.`JJcfe@ fj*/j\$i)4>ڷnV79uK f>u0fD"4uKhuc)4n%otY[ѰJò-پ~ntr1HkO7`=,‡NIv3fz\5~XlZL;{߃1퍿]/d(0p_l UE!DjzPI7= ]릋dSGbz%.x7.W =^ V6JY:UYm*y9=jo ^aPz 1u\yNV mء3TeA7~ޕaWzV'H܍e]G!K$B^dDeh hlQ< B z;U!oIœēU3(̎,`F:qxiJ$\!A Iq鲤sk1B8Rh";?#qXAMjڏ?J#4Mس?/"&Dn zZKei^L> O~GajO1 HzrΪ꫺w_OQñ{$C6*vcvai߉(B:e6H:nj:^DV˜{ Q!r5߅AnOTZE5>hpK18W>ʄ#?2ѭqwmm|9'ٌ=d#Nvv ocH./rR&p8tG_UWUwUuIN O>C3J`W%[ak1a}  !BŞN.InAĚ5\ V;0B֋\KJHP fECpsߟt> vO|Rf0m i TweGhh|IN3 <׬)/9`+hkΎL/quLp"nc~|@\pu ]J]J˜cTtU$JHTkqbW]Hșh@&6h(hŚHU::5+s)KU!אTI퉦Z]Hșh Һ7ۍ^1!|4%(覥DGO#^"IKEa˗DX_rH+JUf4$ Jn dqHvo8q 04K:Gl ԢT֌o24wÃK]*F=RnŹ|++O_QN!C`Ϟh6 ѥ'O~Bdqf1VUcmlzݖ4voiP(j$D4 [!!kBu 83.q7`b0n:l Fûqlq!!#vVK! 
KbtvS{ ')7ݒٰXRLYb8Ƃ`sy H Xba8"FxA<beDc0<=L9Ӕ J0740k0R0iay=0Mu t3 YP=C -dZ}n$AkgXRœV5 ڳ}na>fnFa\νqw0\p?gOĎƅYy]7dKO\阰S]Xgq&L'c{7 ;ZuMb^k↭'>l>|j=,1yfMii /FqcfQi#e| Xl9ωwyqk2 =oh=Ʀ7oKO{YB p,B%B] ˭ʚ3)}7M(yLv&&ɭ:Uhvܴbކ8"ݙ?RIZ4hU#\޾}m/{3w_3!>OmQʆ7/Ҿ5ؖ{if7T{4N{N|Є3YsvY;45(ű9ŕdA'@X9{GL s9f3` &S=Z糳*{1Zٮ džasWylw̒嗟%t8S<Ңdv 2cey5C!P^tudJN"+,u-ok22o&˻n^LMQiKߙ<'V,`Sc ~7>Pf`%WT /a9JsNYL-;y3Ձ@2 E j.BA]րN`N 2AF& ђpN,"VtxFP眰< @Ą kP@J '8RHC')dY% UC [$F+ Q[`]熮a%dŰWfu)0^:1NF0cH4e?f[~:vTx`ᇟ~~| -BO#>~.=0NyƏ~~x& ᬜZ Fw4̅?ݴS[]˿3_:ws~|$p m#z[7<@RPZ[c/}qK(hB$f8nv4ͺfU>pJ ou9޲$μqBYeKҲN5Qe SKOzȩ0I SsQ~$Ggded( z$G&P93BenE(u,t%ElyH*^s^FH̏|;/s@,_)tغs+=%rC#t< B$1?X~$En`XI~Bs)z}SD)?=Ӕ_.a5Y@ %!3tp ҜLW&!=JPlVl嶼:FX]Կ&1|hKw0B.ޤZ@`Fo@T~Nh6ap3gtiw~Md\g֠5{ۏ]wӛ“s/j\޹xǎyIG7x{&qQ."{pZX |R#Ӣo]KkKyB#f^?y3I_MױHm0K=!Ul8IBѩmxLG kL p8Q`^;HჟFw$M4 y5aJpj$+.ڭ@@.˦KD 3kdh4AaicPXXǂi.JjI@ýCxz%^uzFA 岩;RxwM-Ly{&+AE!+ R{c>9J0VJ"WWo'0RB;ATJ#Fl~ Cȑ gU:e*s_M25gտEee|{ {](;3y4>p]w˔ TJ]w]lSFOۋş,\Sn;!ZEJ>?veo+zt)>8"K33 LfT$F :~Ayo?P:^6j~}j~XE4W`XgXnCDf4DpŨdx`ǰNd9iyW=6 ;k ɗwGi)B:1lٲfR4V|D5NƚS^I Ȧ`/u5y)E+Uu;ܿT1 H؞+$2;Hh@?UH?JH W淀d3 "yh?6FB;q8q ?~b{a|(Q"oDF-KndHVĀ%8bD2x'sae #zGXX#TN o#C5zz7 ˫un}(o1ob4c&wa=֢ ?Vb:%J%R9&(G`N+FĠ9䔢vq ~:I@TE`}KjMSs`H1$WQvf =-"ozD2]hz?4v:̇Dҙ~Qz<)Xm۟1F`"~rF@%&v(+l? IA& ,0NgL(dQƸR;CiO|͆OF/ c%jM~ܞ@"T=BMIy5@$5|MϮ47@W n~I7QU46c~>PGsvYbܞfMaߨKBNo97ߌT10|Sͧқob}7 FyЕ>_aBČJ7ܝJk&q$'ϽS3g2 ^ C!FZ`Ad`UK%H!e$sHBN(hzTQm9˨#"+/7rn/{8`KXJ&#%j[6nceIdzŷz0zQFO0"Qb!!EI:L* ;(& EhaӨpJ;oUaR/! сwVo]Fx) ѳ˰DG-1V7}f峑 7h0>l! K(6W6R"a1rOVYz07̙0"L)261j=r\%f#\+6+KDW(|oæko1ix޹\Iɇ#*+ M1BSP;O{INKRpTݱ/Iژ5_SMQQf Bd:ꜜRbUu{ۧw7iMé{[1z"Aax4E]W]Cs@,,glbXfX{Y q.\:?<é0[˳&1 ijX) D9I,jJ3Wʏ=@ U>y4Ie:+"Չ:N(x=!JƨDV =NU1!Q**THgUxXa/D(MVıǙ1YI!3S[BiPŀ:h;AT ͽwVhoHUOge }a{RqgLŀ1Օf"b֨ƄEuXc[5(c(*Q)RC -fꑲ1GnSj6!=Vݕ1乸[v?wjgs%u_]JJ;t_nj#uZOf1@Ma^cO޲6cȂ|اmj7ta34~pH͞,䕛hM)1yػEi'Jub:mxcγjGS[MaS 4rzKɞs2%?}仔ܟ>:}m` +'"CLBÐ?+fbIJ6U0s힢~V6g?Cv&kܺa~ow;V/[֝E?q>>pʙ,vbl㢖W3h:cE"\!L)o8Pځ&f-pK|)5d;ۇ ưa^&˺[-3%bo:Q3kw}ڏ3Z-?\E+w;yJ0ħqVG2hAcƦ 5aJwR]T B&pak 6ؑ#WnVWO. 
E c Zȭ8"<Mh;ݹ*1m9c [ըxmCS7b\^cCA%@w6ZZ6FY ֻT5I>CY&Q^^C4K_zcI:Nr|׆0JG-4!>14OǶ'4ie7iw,HӚW7i*<a<^!RGi~; 441i9z$,-,GޗgGN7l9% kA;Ujɵ0K0 idB`(ċf@ Kyady{-1/lGa5(SZg#*ƑL-'R0#P,D)DTc=51TiK0b-5ThU*Fvs[&|7VeM7"JdDG6_Q(!$23D)L.X`S-zF|EJ[G2"b=wFN@XLs1ZIK0_Λﳥ їߙ F~dnNW7%VU@7yC7IqU[)VoE lɀƖ=ӎ VRhH UY#W@AIeS +RvHQΓZNd Nb.-0b$X˷`(eX T 5|c a#$vJ);C94.F3"lq`i* cX 6AO~kJX͊XT)K)O~<0d,䕛hM 19wSjޭPISFw;ں j5nMn=X+7"zLuȻqV#);w+A锶ݎѼ[qGS[MMQYΚbyHyCCH@XrYrLHՉWl,(.ѩh1T%%o +ĈlB6ooP7O$te8|% HA jRi"ƁgY SZI"g/oYח,vy7U`&8:pw"^58rұx@\Tr}Nwqe5[簛0[ Κ {+{n~5^}ߵU ~>՝Evkm!Gxg)v/ώ`| eؓ ՘JB 2GɆ䩀>;n9IRO8%# 4Ip`I G+=hG=( A׻beK)uX̐~'dabfKXmė&ty*`(0Ј37$,Du}DJP%1^H+a0oTQ:C&WZQq$=+ZZ@lTudGy Lq22NIH+Q^I+Fba>̺Y]P&`FSCV'Nt?VIUrP&j*gd4ѩY!=j贙4y0oi|<-S5;gSk0Q%Cnv1zJATY;ŚBbtM=$AN '{;c0# f,Ncǘ%Z:瘟7%jޓG> (DYgj*1C|'YV 1cgxwܪqBWܻ΅Nj90ME,,zcd)}ELn6rs{%Tu}u= pഞىv.<jUʎI` $1 ޅH(ױ*c.h,7wf3Ӕa&p!fS!|.7N0`d.wdΗl+:6\+6\^]ۭ4Eg}JZRA:6W\-}E-Wey*]Ǧ4y%i4_) F$rn*b~~(6^ KbN4{FC&Bz r$=,d9O~2FŏN{4pOZϨq @OXPܣ"K66t>,rV+蚥"ΦMR#ghߖBZZG;kEEYyX`/'s֎YRFOߗ5H#UOfyn^>H >BNVfSل-=-~X@Z;# ,X =eSHפ?ݍ]Fď7J3p]a~E5ZujZrخϒfÌ3Anիc}}H@ cw`Y2jR+ šXtjFr\qƓElJ6/ҧ>]x}D!QMyR}N$YhP4էTK*m(=hB#JI>o6Ci\pBzUhMQo%B>{+%C&*J$2q/l)OW)Ѷڶ4TCPQz(Fh;>)7Wn%\{O^wJyVyeY4yp൝T iS o?^ ܎kyw%k`"PlwW\:Ѩ^Z-ri. %*]&/E-lD_&ȱ4Ѣ5e&mts\q6iޒ>KRwe({XG?4\R/v:?k0xXcX'_:I׍Y.UqF nJYEօ>^}¦Cϥh 8I%T:A%)6A3B" ,k'Nb-{nhic ƒპ.u4Is=[OaHp#rހ@OG=)OK^yAVH3^;CyZϤ0oHC4bE iӰ`v3\8e_) cUU?VE) ΃C (ԗ BF ۈ P&MX?O>i)1f} gc+0W!CTiy>YEZ$"Ȥk5vIt?ƈ&:c>3qQRA"[+UF8 ԖF=r\xa5)KWOaHsPG>-#rCvC,P55Q՛Ed$=T<%Ύ%c!ED[2@edz XZFn˲860H+J H,( ̃>Ʋ)Regg+퀱,б/C6ڻDUX'2rDIVLdupfE_Qr |Yi] %n%VWmxQ/wlن#Fv7b@kF\VgI,!4;j6b?' 
%ηq$8of,VvdwXHї+<ƦX^=ޝȄlj~˟^,,t-Z^Q}5I7 aƜߙg_TOc1_,~{xEQNJ\N3g >5Ϗ =;`%[][>_-K&4dgKo vF4Xkǥ\W&=uum_S9 %5?jVKڲWT׌p鏣wGPyM?ڦw{65-lkGyR^mM;l?EɅ$g[."Hp"?)oUI;6Fɷrm{O..Vߞ]~ ǯ v˓ciښ›O\F?IeZZIxqoe1^vUvoD>8}@5/Q-"\&jqy6'w'wZD:Kn#n b~nk#oO_kru|rqyhբwoN>X,~1W/ϗzUE(1/Yԟ?s>O// "Xo͒*oK!+LOx7TK_лA t~cw{Lթy͍m |MKaw»AtF~cw{mթ1nORz CtsLzj황6,._^yyT}IWb--oU8G g' VWK( ׊L%[ QM sGZ;]_KOuŻOj o`Sԕ`W?ӯۯVFaȂ2;kkkƁq;8/ŕv-w w޹ҌvNUNJ >?h=u/b/Lfg*/p4v0NѶRm6=vڲˋ)6|s;;dLN!ҍ3E??p7bwZJ;0YJbxp5 1XK;;mmQ Z)?ynք-/5i|Q"TCv PM܇qpդ75 ݰg47גcGmL53/]X@o㓶z܆X}T[pKOW7Sʠe74Wݜ2>jlN!_9DGa aWލ~ލAS_o j{Ktĭލxz7Br”{q>nX@g7x".\mۻѻM+FLL64u_[>7s̻z8[ރaOS}J5˒x6zFQԆR3%w۰ tJ%MI8F[p1XY@):mrJcbŲތjKdcRX hy3у-%lG{ {TYT\F]=f/<6{ z}mC >E*K)RnlcB% >E*8Qܭ/{RX޹#JЈRAӆ 䜂#'~ >90&s_+L9E}D`{*ծZ^6ZP+a6_|Ph(=hh MsFͥCRotrƟ߬5fwt4ҋƐhiNhNS'OQ}J5/GYGT]o^o~Z: mL(]3I- }z n~ d{F2_IQP_Qr [f FYV&[ޞ$Xǵ鿛:bLf:-i1D;Xw1^~7ʌ Ѓװ ~1A#(H(U6P$ R oX<6yً\m[;z7Ds3ex?]7Y>])_Dj^{I.4y0zk:R@@x7ϘhƛqrHۻNǾ#렋ER >7a鶾"GjKTZiAȍ(QMH0g)\tº}O@@JRAs5D '`(o~x1k섛;QuhPN o;=4p\?|E״ndY=ġC wD/gI-B=tu%QSoHDAAY:g-Dɪ)2SsX xRz; S0u`(Q9vFc..L5kƼV⦔mq~+_NiN`79dH$-{Fh%X%WTyvZް>Ł2$ m -;aFeUMf?OQ}J5/Qzx({fs3`ݚJ6[͛d;?K=a"-j~ًQ1w|w)|U[ZMI/wz83sǟnc^ZVP'?- Ӟ'vҍcxk#d8zqqdy=,PCdI5|̢MĔƁnL;Ҝ$\r.L15ew6є^XŸUOza0%/Ӝ=x)}yESzDA;nQ@/f_\ 6p)5҇y(Rܩ u,'3C;b4ag1,sq&#W{3/ts3ԣs*"\O`.<}T}#.SR.輩7?~0ZA; [3H~xOf^r 8OMqq6N_sc}TSU81 %0*l"q5 7S ^c|[`sq28G?&q8/qr^B\>\ƾ_8۸867퇻vrx65˦W8 ۄj{--#>S*Op_py4By ?:Fҿzإl|aZyI ZECr ]xw|{tqî>\Kdo7c4ŝykBdYͅByȩԄL WKDm<pB, .zhrZ!s"6M.[gs.q\DSuWss~̧IxJr6GW'ʗWOڸTMf% A_wۛǎ+_1#LVVM׼ч_lXqd2~:=o#B3+t5M&#\}T)J"ŸX-6- 2,I+00f:u,M-qw3Gӊ5y%֡ʳHo.B<%]H y9xP QQHeL \d=Ai[1azgEN$*UׁϑY8@3 xفã/S}&xsKO7{NƏW8[ܹ+ z%}ft?_?:z?4.-Wn~+| 甒jharp{f0-Uz6fplŶS{g?ƶ哽L ۬'}LWn(EWӣ)Yy|x=]q83q J !:cKKJBJ F=?׎z?z~ǐeCbh\.3_ufrP@@1P[8{!'/FR*GyEãw*e UXC5MŐAɹq 3&%GL:h4d5G84Al;~ײsbWV1\W!^N.cnYe|i+k涊(Lu )FZiw[;{g Q:8tFv3Sg(j;nY%k4Ye?Lg"fd0Y.f_ "7=d3]shpcf+=}01|$n2}.9o6{6)Y*eWRvEoK= M/V 6ܛy9rN\yo1 <ُnO.x. 
h@e눾%OINv%wㄪdJRTDߕ=6Ol-^\T\'[`i>[9=}{5sޙvu!tT~՞>;cyJR^]\Ԃ2gNB̈JS8-P68B~=7M?~igiMjɣSMY@vrU2m([2߯3BkC)}$DI#4hW:m2N,nZ M)RiaDJFh&R6whFuҁk=қJ!_M S-]DVJ[HJE%Y2EtI5Lim+"xH]/l1jEE*VLz;J<$bvZA-Aiac(mԌGdF)FhP炴 O6zidw(#nP\xk)_G bAe:!0C{"HĠ΁B CmQdGZ 仠H9AENΆ#&: G9I*&P;@g!"ܔ:ru"Qj ehus'bX/uTvGș'4jR8%5S,G '|P}*CX F Z0~^! $8D:pM>eGס.sNU]f/Aٷ@f똱S.(3/pI :Xᐲ `K&,qq l+\w * Ͷc-SLs6755Jq -'>LRP%:>RF[>X#Tyq 8J64)zgFhALfZ5]ILME04 EִF i Mn,P>n=tvƸͮrYZGP%^N*@/ ԪQ|q'=a6[? \6ٶ3Аb p2;l#~3v[Vq,Qes*[_ fIpPE 7 k;;'铒NV>`:Sg4Ÿ;dC@r8Gd>= [n yte9qynԪ4PGdm6a7 xw;۶A]>P=˃zJ73d}0Bs*30>.<%+l^fD?8 ? x\q' 펨.c}qAV@ΗIQ5!<.:%+lYύ . ,ZC\@8ڄd}ሉ!yœF8|5Sh bXO T]zh})Y&#m<'s"a[AکAI|R\"x tmQXv趵2Ӱ>k$5qOI|GIc!oPb/#(ppQq)ZC- oŤ&E UTrQBaD 9*6lSfJQJFWY^v8-ŏOu]rUؽ`h|?掯Ft9<g#_Ob[ VpWr" FW˫kꀟn;0ōwem$I~Y`Eޗޞb XK/<ݚ--){3>$%(RʺHJ&hr*"#( %2|_v̿Օ-s. czdi?5hY9 UhI^h*y(>͇ɋq%2&87cjmʈp){IIZ{z!̌jrq(X}F,02cq\_h5Yrʱ )d,3h,.)Os̡AɶTP±R+=[VוV34S7`ޚpÅzN8 q]g5YVK鮦EL{{Ofۻ7eɯ2okLѷoHM޺jӰ} {>\{{_ -mo]郣[Q:wBT@85cr-G0@] AIÇw>H*>q:OWם:Y+}ŋN҄٘TjAT ^:bC Pr.g.7c"+RMt a1u&4bfCݓ{0V3.EE^pBK,wJI#K$ WRsYq.ށ(,^A6"hB;ȉAsDI$: {"R[8ږZV2BY*)2 _~#´Xnqo%i釋C)[s2Wݖ\&R_wۇo~0kɋB:-W?hy{2Q0ǽdbXUjn]ǚr/R˻NV4ﭦ{]?C<;wA>gMX ݆ڠu~Kt;^vP0mUO+ZE:itm j\g7Dc" kt: V6!߹n)3\궞0xNFMqNyg B4!~ 4 ޯ"li2 E^R9c3kC@6Cj|ʾ"fמ@U}n eiՓRZ`:kF#( DBm^s#A*8V{p9@j6U'023 *hBV'KGʊLiZ{K `.Qv: ɰy*--ʛ%5O[`?ωVϟVGS),Řtܯ\k}`n(@NOJ}M558hHZXw$ p  {ESFE"@'4FLl`6S%K4(䖇LLdMcg6ec$sn !5{qwO9\u[3VXةBP{09 S|"LwJcv"ΑZexGoyGU2< XZZj_ýʕ&mU*Jimka ]A\# P913cJ1"'+BDg3uJ03Z0f޲qmd2I8R-Lp*Mc)_9.&^!UpsnD~7R@(VZ Υm@brq/>zgZK\S׀pI"+}>5Hn5 w,;ʼ;_*&3!JbrBs`W#sDlDd R5oY_IyF:켒)xO c< .+qiwI#&9bX)5}ZkFAB\ e\$ju* [Њ2mdUڍiD AS 霥Ic$H$,-7I1+#c*ͺ8Cy~_;YP~_'_av~FMK QImugkں)#/{h"~Wq1k^ʬ?X'7Mt@*"Z-:썢EgTߋV0ɜ? @rlh^LHXws=,X8܍utZ)fA_vǕ/]J<}6A*#.}Oob+9\D!b毉"֤);7'Dac`7xkF5|GƖƚ<CPyD{},UC&Zv""ga3ʓ9Ot-()5NPqFH _>CP xx|Z5WYBWIr69Z?9 rSp.TU@. 
ն7l[pFI!M0&I fRNpie-*wO.kĚB(\ 4h!Td,-Վ.HS}feޝ/A|*]RfƘQjOܞړ&Pۉx/׌>JÚhaJuPB'SywVwOUYq#hMkUpz>eOVbfʓ\]y~zs.-\{zd-('唱 }#cHE᠝!SBG]Nmxc]]J :ZV%(ݓI-0B QɌD6}+a[ S nr`ԛ]k!#՝\_zY%yFTldոt~뢄> L n=azW20fclt]2}E2]ŏZhU^e)t[*mx<ŷ:%n֍4mHРX'C[{5yqՒT Tgծw0# k~_o|_k%c'[p&T{phʼntS砲LJ+N9nv;AGڳqZC):Oiг&֫̉mԋkg {Ɨk 4N3@ڤ GY#rIi" T&&qWVq6SKoҪS29.eTQ12\L01y2,G/A  u~v]aWVŽŽDٽ(#TlG,gzf4Bj&k ;3 ~2 -(+>yMA ;6޲2Ηj9 䈛2ͼ4[o0lnğnN"A}ҭ`K Zp) <F*-_WӧߛxxSrRXc5saNl%f{rݶje5/B'D2<ަuӁJ =x@3Ɂ ̯?]<܎<~uCeS1y"qM*VSG5*i69*uC~d%+^~BhH-W6L}4_hi[X%[yP:W.!N&%,N?Kd+~_>ZT-W$eAy #?P:<@xTp 7:}G~+s=̴;A{7˓GI9^jI@naQ5rIT9]T L +ˆ 0 cI,,MV/n vV)>yؼl\\?ݖuIhlY?]\}m@?~㹘yyolxF霓#.#\{7qwLzU! wՇfg=[vfӀ%!|vSN㌎dòqw몃1ۀD}=,pwK1C0i4v7l5U!S7bv1= ݺSw3Os~/?V[_?\W  Lz6..dۛrK3_n?R=+-y3ͳ{=FҹIz |vA kt.*Aň!{pG( V3 h'+:NjOQz2 *^e38{fV!83~zf l36̪Hzǭp N SOòN aR\TIA ^#0{Ӓo݇SWgR)НGepᓝe?\ jED]ڄ _nBW;SG ɝquK7=3:BOkF+b_|ɉMv F{v >#( ,_ @ѣ?CHQo,Hy] sa@HRApL~ght<ؑ ByaP4,}H9*2{5Fa}Žn6dZW͎?e!]3cfitW}؄.}UEPe {EeTR%.j j"cavlB>|лC|ς;1wWz\<ۺ=!ݭAELH.j3C:i>p!'fHJJE4u@ܦh3{D1攜J5Xr|k冴nvHȜ)&̔Zgfh–D )2Y mj9E(QP\3 !YʉoT:;k%έseu IVbV(4 n#WDo :iXSB3xSfo ^= \1 mfI{q3J6U;*89ۧ5UKc/EE\Tק8wS:lhoc/Lye@nARL^d84W$S|ݳ_OMROqi4=Acܵ-'}eb`E"*4k\Eerŭpp%dE9dJ8ĸ^wqR`fX*^K5zU҃6M4['u*r:(WoةѬjÿsFqs(]N<䌰i9Y~:?NH.J7#l!"kpjA)Obf P"b"TX3 r{ _MX@˿@q&8[3-lKy.Sk5tk|hMۥUgC#@>1Ee>H3sa>Z㣴Tu[%] Qr(&旛k6Gc MSWt[KnӭTHƄa0FBMHξ",,e6(XN.%VI c7ͳB`Y#T~x `. 
"YN"ˠԖƬ\ےH4IQ ߳Pq;(DBtQ}R{頸3 q^~A t8÷S_u?_d($!\1jS@ʁ>F P[$QsGާ$LoӣVї[Z#Bt{AytZ{}.L(< %B ![P+b$pd0Z R9FC ,.e=RKBKtU gAޔ#\oj_h}:$Qord',Bgqy?Hg]~.ʡ#[arCڝ SGWXpBuHHK1H-dGa =^>O=ѝj*p[3"_R]FfΊ*0|!f **_dqq 7 & &#6j*`tin # 1oMǼOi 1CssW+>tS3b>P LYF"a7N>78!I P$].Z0^ I'Yif5g29CEFRیH"۩V3nM~@JޜF3T I}3c/m&qӇ*J,he^(k Ur& FCqi@ݴǹ GMe8סV:T{& {i4PYHhw)wPRNDP0uΔJ^*p%vyzE JR!OuZ{\+!V khI6&T'Qά̈NPԠ:3-.* 'MazE&LNPpC0~zdfle?b$a?"%IE;EF`Ԑ :ps:e88zp3Rxnl{o{ӒlNfeo8oiN:|c|ń$P*wc/q_tռX$ CDzp="<:"EbzgKXGĩ@0+lr3SE|\D쾸H*b;X'pܓ^S{M AJk@ehtVR*G4Dy7CnQvDXm5xb r?5~?c-&3f>#c5F;#b N.y^mQ 3;bKz 9 P\r9- yOzt#᫦,~9;R~???|,%r?Ǵz)̷ypݜuƟ4Ԝ7%9P$\HMJc t#ʔZeVq,xY@2 ~Q1a/nxӣU myo}.Jf4g։?TórQo_.*_RK<Ҳv,[ҝj KJX7o<>:$\ .4pb1 -(M|dũ@($':UU+w(ݤ<7ƈq$IZflAj1bՒ7Ryn]d^4U-s$6Weih)FLZ(FB+S[,#Fr,%/:ȿٻdWd}{vbONu}b@h$AҌ,O5( )5ŋh(Ua4RQb` M;CT #u$".5_S P,jTmv=⎵#5Rֱ֏sRa-VɎ0AT01*uhZB`4*1mWEkE~v?X> ZfQV]Z[p'$=oGMe40yO z _hա'.`8WouNaq@ms܁']I7\)Nl oZ]G}2 o?˵.pn;tbYv}焖 =Zj>זzKs{P 昉 XK=*w].9fNf&Hj@՝3],Yjy>|nvP[x _Qɶ}VHs~yfĥRt8&zX^v^L20}6Ӥߤuҽs2z 5ObK;ι<$LR$YuSֿ?73^6<}'CR 1YK mmmz&{x6 X#oL۳d|f6RfEHԟx}Ȣn#"y+Bۄ5&^Jfr@ T~¬6j}m'TmN *,btj,6KnG (^e5,7mٓ^ ڣ5ˇUC!sۢq~:c}T"^ͧn!oom]2]VňadG\-'F*U&M\|_mզ7Pw~hAŖ rI(i 3K9+`vKl E7ȣrcvpp3OXwXt_1s;GrjG[^` oko*:IF:W%W}q$ Ng{' B N-Ds} `,7G5m쨄\'FD,& "\q$Ni /患qZSA3|Եf_jM}ۯ 6!aZJȈ"y&e.^ -lD4BKI}/5,>W0k 98ASV瓞՗po}SEXJl;u%olZ[~qH3Aӿhp"7Cx0QC͈bN9KhB%xځ.E1n0Ix5c9ӌaBhY@q;T6{8 E*Ea%dc @詄|*!}JЈ* 'X)$%TK~~¿!< egTJXޢj7;䁉IL:LgI:ס亙~<0isZļUM$ڲhu IT#cxJodo8aanH ]@|KEfow'<0Zt}"Եp%;lwo؋c=ȝcdLwSkhK}h!h}ˋ/ ]E3!<>d 0Sc40|--z)C.?G7O@Xe>}dh*yƋ <',M &Gnt>pg68u՘!D'meKZr@ྤ|,`׾BkDպt ] .\~(9,zZhAۃ,>BN9m 3%fe!  
vcgūQCV ae3?mi֞#M}a;m:vѣx}^2W ;ul~&pe CVƱ;gXCnPiQ0_8.>;jm"Vz L=l1y耸W܆D!qeC{un?> -'a3_ ǧpև>ܸm>X YgCPBX| gpk#͔2ud4B֘X"#%|"ă|Lxx3Ȳxjvw> +8`ϘiJ4p4Y>-ӕY \RA %~C(歆`1WIM} V-1_..(rR{5{}KM'J]XRsH3U.Rwçpۭ=p`s8{n꿧yX5ofUbe?'P`m_jcr{?$}F&s3#p|NCBnٛԎ?osYSq;Q 7!_)zJkL{Ev68 <$لra \*bZ$$cb>U {mꂹYB(Fzv6GYi |7wZb)o9xx6׏d^?HA&*S#jM!Mq` 8QIìX,edH#Z Ę3Axv?1B*-¡}kfϣ^ͧno R?NtD:)4tT댵0Hʌc@Uk~|gF_TeWWb%j;xOWG7?Sb߇%x:rܱU<36 riR,ǀ_daWa|Llne5$0'I˞gUVAӈ-4 r}UkVOM 9>+sC :cRvx/mzz8,&]o]|6!6-|)%wNhPiN!*CuI?CCtͮDjn0HLI=g&밁 ;.AOϢ?=4Nb^k̫ׯfH0_egƤgV%4wXAyTQ [[t] e˅y$8UR U-1~'|슫=l6SE!j.t}yx{ 2q]\e-'B^Ȑ|H2"qa/l2_'HZVR#oz 4Ei\Y+P h>j68:! [kZؼV]6pyD5S9ɳA?a'3CUhHJ H>9mhL\"<0}{$L\L1)_Hy.&Vj;.Jn@I}䊻K=\S}]wE7t̎ZK:Ko:(kg4Vf)ޟlgDvJb:T+DsNfR6Ϊٟm OJ.U\x~u*hqFZLJ^yGB{筸 U6;ݒ+-n{#Te *;ZZ]z*G`++,|}DdLQJ$4]j!>.R ۀ8Q\zKmB;o!7[RkɻO]W>uQ^;bО!,䕛h-d˻i ޭ|L;x3FڮiVYӻWnul Kqn{7Q:?w/өJ팫-jϻզn,䕛6@tuzsl @~~1JI3S6VWruqj/͈m2/V0}x(Ww}G8ZÉx`#i _bhO8#vi6# l[cڳr!"ٞٷPHpY$_{7]Ja#)dc[LȪ@HlЁ/NO֡#R-Dnx& U5G/HҤ7Rkv-3#nRq`nݕO35O_͠aaO=shwOp̿8_v@sCnlߒ U˞4xb@$K< =jg9IG:1&BH+"bQ"E˯wwu@x'I8 JrJI})5J"z+h+e;SoNX77[RKz+h+J9qx#vJI})dVzVID]]lV9^P^YhR@%ԕ.]hC-F۲Ueځq|  !a(.ziil^GK֑|~u8I [zk 3'D ^IlˊxtiQ]Inlq6%]]fdZ5mب fGkLk3:TIRTR|C`\dC4~u2_w:Wʔ}i.ZvGow;Ƴܰ?6okfSpEo2}tktTRLo*%,pROja.fnt1ϟ3kGmqL]I)]ܐ<srJ'aR>rX*V-{twкén\cn{-JEx[q!kVx(8@]MZ\дt DXtd5d*&gopn%> e_nYE>vusWCtb$+V7LŲڴ#Qjs!{(?~џ%5Yf'7kIwjɋ=?z}KkB]C.wqstGR()"XQ.X;~f['+[,e9=}(MH3C"&H!g2RQǚETY˥'!v(՞*!Df;r4_iFv0N8R Rʹ $64%4dSZOjZ[8p 4|? 
RȚJj ) _4\GpGwR8pe_q_2< K9!"ĆpyBQK8ђqDA(ҭ4g:\婔,WLF2IrH&T85<ϕJr1i.REPm0}wf1v)ڿ 6 b~I( Nk1"hT-mQWtOXoݜUܷ'vVND bJF`WYxv9tbcfSjT yy1gUesZ)e~VZH-8+ _R9nX)H?+-梷 R~Vүe'7Rs=e[)y%K[aR9Rs*!^%sJŒXM$ҋR*q)tM7RsIom//g6,^"Y)ȺKXᵷKR`|qԝR+fTXRRRdt>+=,͖Ԛlm~V jxN+Bj _=]2gL.!VʤRKBe[)3> W=%[|CΞs )RlJ%g˶R~VJ'AX'rO3%>lb/<@c%2 MFƱ\nH$K2It+s&D(Y&V i܈C[nLӨGWUA9RsKMPzO}2V׻RWZwo5:ZFzknT9Syw.vR4voOwQ~S+@tP(ijtM5|zzX]j6fl[ߋ遯YSVS,] n,Е^ŁOm>I:xv] I[ ' JJTJ+؅P::Z_Gm\ՇqV_c4fMjIiaahr^Ʈ}J=UɨjvjuN8eKלD>B'*[ʳJjV}Nube"Ә:=U"s垢2}AQ6 VvyV} H*TId u]Ϯ= U֪ͩ aEȦq 0 ř:+aUyd59ጄ>\P!88AA{ H[}|;eܭ^AJIeQrI"iR$h-t%}ۚ[-e wcch{ͫ@Kq>
    ~y㯟7:ٯl~ c~?<(%~;0r\B[ <6Nw( >f,).dT% jQjeWWSZ>>;]Lk-]>C˧L 9̐8KAB,AD#Tɹ!&T(D%pO>"*_>yƩB(raJ$S!FuJh{p{.2f _U>w54V;ӄBWJpwgr՜CrJog 7sJf|yJ;B rmҞ/5}YZNl_f&"sW@SdWGk8C8BBe]_'[b1}sue6"}VW_23Y|uffH'w6z\waC,ϣ>~]Cɴ|Y{-I O>* 7wc6"3l8s#ʵY0Fã8ϒ$V ,";I,okW(Q Yq& [ھ]# LJas)8& yeY-no=Gaы#@"Ag-"\&n;6~\12- ۺOharmxw7/##Ŷ2zV>V|Ba$wp0I~V ɔ}*{0(d>e탆`pI(v}szvY'd;5/aMFN4SYFARJDb9 5v'Sx&A5F*G^ =JYWWCJ7/^έ\S83NrjRYtLXQ)}8p [[ws-1طwJ6Fw@oή>P1z@rQ^V?~ϊ}??D#,@cZ#=C(Ts4m ;_NY}ҦVzʰh[h?5Sffc8^֛vt0#m5DEp>Vg\UBRRy&S8"R}{|[MB4JVw:-2Dޚl1$-ƹ5ѧnͣ-sk =(RU ]nQG0v>QȾY^?6s7PNDw"hYa\8|͏7Mwv k6j[o~zK7;=7Ƌq6?Rf(Ϯ]Y(K4WQ3I0E$*thpK'>m V^@yKK  %^IJR]Ix(Rw*b#@B.+8V3餽_B %ܳ|"soOyňp+pLPk 5JĄ`L 8ju.3qfiCrߔ@&h%wm͍XP*?d;[[3隞l^r5-KI'$-Q2)Im%tۢ@ss!7vm;>2^$;#$T4M L1IN)MBA!)x#E6@hƻ|Ye'¢fjȻh\g"xp{I0yKPw<`jl 2(ZJPeSŹHlT HL2E&j0~?jwF`DɾYml+*TZ`-O060=1B66lO[C!\tyÃ#uIrO61"X$uz.ՇnF|<n#4-J<9qt,VwM34}{w;!zJ^wfä78ϝ[зums4P 79>5'õHNv9=L꺕HrEO~ηrz$c5Pl^EyMtCyM\ZkBވ F>\Eܺ'4ޜDܺ5`~W ^_NE* 0h"3#)"HPHH2H® v޽C)4RTX)?`8MXD"N`#iza eT̋O׳D!Hb:bKMO&!WLg'ްqˣvdnB}Hpߜq 1cfl@K$ O]KHO..]v snNw 1n-2|J…ܫ7 x&cL:7>dǼ|ǠQW9!;E ֗X䝂CZZbg Bm$HS!sl4 &d34Eq>}/"mﱙL0grߎʯJvu]שu%Of|4'N5g[|xk\Sď1^Br~:w?6cd"!V0N4 "I*@,"84'`^ʺҙ,Y7 `c6)aPe'r~ј}u|G9[WF+d0[0q Fici'J+ lŃYU_BsbUߎM?磕z1vA&ylz'St&r/fL_JNγm^lNili^ VRg]hIIZYwqQf颮FLJڡbLNL!yQrS2MPe.V~$r{ʹC:bm aM/)(!%#"c..<慍X~35 P~\}w;M?{o?$.^}m=2^n0$.U-pc,/7ѰT"68Nb1jmROiUpgW]wUP/' A N؈3TK&LE)N4#"LROv<2蝍bw";F/ b{v|S}~$~C#а|Tc2&t&?Gk|F(6Q,#RE$JD:M##q=x*_#T8-݁&˝Tfdk~8|*%.F҇-oJ(|{Far5fi1j>,DDpPVAяxspZNAW}'OOonDFcͰJh)Gf*$ڳ)K‚Ky5q6c DiuMF x+Bd_'sQF(|VOf`v֨1Y$v7~j4a89:w|y3UC9Znm ǥ`[^YB}^/nalc%7t}oS$p:ZNjYXԵA%+q([h< ydJI=e9> ,&)I4 tQh gx-Y=Nyr`j|:.gdgGm1۫醍b&^=Q|i !IUHUĈd66~یL4߮XwvfmHIL8@PQXHjR)G^&PĒg!9ژc-ޯSE3hөp4 TtMwRV$p3L*DJ,fɑ}6[+G.ݷYΉWefٯP[٠z-$?afwAX~(AZ|>b"$ Cu,)R(adtDUᔃu47q* eP%MJ'" 9) (`9'iPJVN7I笎Kl.:dpt¤gCu:]e5jS0N#w7v")(chIK^Tq:)x^Y>b$:uWЃ\ #Y Zu°>=`ц=C\LNdCdt.<@4QnKז>&Xvz&*]>7JXآ=gOZ9Y;GOIwx S?N8cE."wWG(D}u,5FT~*WuMfK߬:= NZB}.'86Szq<9: |~)Ȳ2;Cfk8$^R4[Sʵ_a48vw\( */?0P'>Yp(?V&te/WiB6 5~>| WV?&#ti/ {[=8@.[<|{|+ǵƔ"G^!K\ 
Jan 27 13:44:00 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 13:44:00 crc restorecon[4680]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:44:00 crc restorecon[4680]:
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc 
restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:44:00 crc 
restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 
crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 
13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:00 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:44:01 crc 
restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc 
restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 13:44:01 crc restorecon[4680]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 
crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc 
restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:44:01 crc restorecon[4680]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc 
restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:44:01 crc restorecon[4680]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:44:01 crc restorecon[4680]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 13:44:02 crc kubenswrapper[4914]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 13:44:02 crc kubenswrapper[4914]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 13:44:02 crc kubenswrapper[4914]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 13:44:02 crc kubenswrapper[4914]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 13:44:02 crc kubenswrapper[4914]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 27 13:44:02 crc kubenswrapper[4914]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.080346 4914 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083129 4914 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083143 4914 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083147 4914 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083151 4914 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083155 4914 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083159 4914 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083163 4914 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083167 4914 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083170 4914 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083174 4914 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083177 4914 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083180 4914 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083184 4914 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083189 4914 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083193 4914 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083197 4914 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083209 4914 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083213 4914 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083217 4914 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083221 4914 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083224 4914 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083228 4914 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083232 4914 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083235 4914 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083239 4914 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083243 4914 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083246 4914 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083250 4914 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083254 4914 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083258 4914 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083261 4914 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083264 4914 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083268 4914 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083272 4914 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083277 4914 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083282 4914 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083286 4914 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083290 4914 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083295 4914 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083299 4914 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083303 4914 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083306 4914 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083310 4914 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083313 4914 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083317 4914 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083320 4914 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083324 4914 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083327 4914 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083332 4914 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083335 4914 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083339 4914 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083343 4914 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083346 4914 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083349 4914 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083353 4914 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083356 4914 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083360 4914 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083364 4914 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083369 4914 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083373 4914 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083377 4914 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083380 4914 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083384 4914 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083387 4914 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083391 4914 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083394 4914 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083397 4914 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083401 4914 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083404 4914 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083408 4914 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.083413 4914 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084758 4914 flags.go:64] FLAG: --address="0.0.0.0"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084772 4914 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084779 4914 flags.go:64] FLAG: --anonymous-auth="true"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084785 4914 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084820 4914 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084825 4914 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084843 4914 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084849 4914 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084853 4914 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084857 4914 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084862 4914 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084866 4914 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084870 4914 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084874 4914 flags.go:64] FLAG: --cgroup-root=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084878 4914 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084882 4914 flags.go:64] FLAG: --client-ca-file=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084886 4914 flags.go:64] FLAG: --cloud-config=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084890 4914 flags.go:64] FLAG: --cloud-provider=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084894 4914 flags.go:64] FLAG: --cluster-dns="[]"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084900 4914 flags.go:64] FLAG: --cluster-domain=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084903 4914 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084907 4914 flags.go:64] FLAG: --config-dir=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084911 4914 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084916 4914 flags.go:64] FLAG: --container-log-max-files="5"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084923 4914 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084927 4914 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084931 4914 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084935 4914 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084939 4914 flags.go:64] FLAG: --contention-profiling="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084943 4914 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084947 4914 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084951 4914 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084955 4914 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084961 4914 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084966 4914 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084970 4914 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084974 4914 flags.go:64] FLAG: --enable-load-reader="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084978 4914 flags.go:64] FLAG: --enable-server="true"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084982 4914 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084989 4914 flags.go:64] FLAG: --event-burst="100"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.084995 4914 flags.go:64] FLAG: --event-qps="50"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085004 4914 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085011 4914 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085016 4914 flags.go:64] FLAG: --eviction-hard=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085023 4914 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085029 4914 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085034 4914 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085039 4914 flags.go:64] FLAG: --eviction-soft=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085045 4914 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085052 4914 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085058 4914 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085063 4914 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085069 4914 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085074 4914 flags.go:64] FLAG: --fail-swap-on="true"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085078 4914 flags.go:64] FLAG: --feature-gates=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085084 4914 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085088 4914 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085092 4914 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085096 4914 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085101 4914 flags.go:64] FLAG: --healthz-port="10248"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085105 4914 flags.go:64] FLAG: --help="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085109 4914 flags.go:64] FLAG: --hostname-override=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085113 4914 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085117 4914 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085121 4914 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085125 4914 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085129 4914 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085133 4914 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085137 4914 flags.go:64] FLAG: --image-service-endpoint=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085141 4914 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085145 4914 flags.go:64] FLAG: --kube-api-burst="100"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085149 4914 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085154 4914 flags.go:64] FLAG: --kube-api-qps="50"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085158 4914 flags.go:64] FLAG: --kube-reserved=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085162 4914 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085166 4914 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085170 4914 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085174 4914 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085179 4914 flags.go:64] FLAG: --lock-file=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085182 4914 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085188 4914 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085192 4914 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085199 4914 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085203 4914 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085207 4914 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085211 4914 flags.go:64] FLAG: --logging-format="text"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085215 4914 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085219 4914 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085223 4914 flags.go:64] FLAG: --manifest-url=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085227 4914 flags.go:64] FLAG: --manifest-url-header=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085233 4914 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085237 4914 flags.go:64] FLAG: --max-open-files="1000000"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085242 4914 flags.go:64] FLAG: --max-pods="110"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085246 4914 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085250 4914 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085254 4914 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085258 4914 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085262 4914 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085266 4914 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085271 4914 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085282 4914 flags.go:64] FLAG: --node-status-max-images="50"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085286 4914 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085290 4914 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085295 4914 flags.go:64] FLAG: --pod-cidr=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085299 4914 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085305 4914 flags.go:64] FLAG: --pod-manifest-path=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085309 4914 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085314 4914 flags.go:64] FLAG: --pods-per-core="0"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085318 4914 flags.go:64] FLAG: --port="10250"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085323 4914 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085327 4914 flags.go:64] FLAG: --provider-id=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085331 4914 flags.go:64] FLAG: --qos-reserved=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085335 4914 flags.go:64] FLAG: --read-only-port="10255"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085339 4914 flags.go:64] FLAG: --register-node="true"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085343 4914 flags.go:64] FLAG: --register-schedulable="true"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085348 4914 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085363 4914 flags.go:64] FLAG: --registry-burst="10"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085367 4914 flags.go:64] FLAG: --registry-qps="5"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085372 4914 flags.go:64] FLAG: --reserved-cpus=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085375 4914 flags.go:64] FLAG: --reserved-memory=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085381 4914 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085385 4914 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085389 4914 flags.go:64] FLAG: --rotate-certificates="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085393 4914 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085398 4914 flags.go:64] FLAG: --runonce="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085402 4914 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085406 4914 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085410 4914 flags.go:64] FLAG: --seccomp-default="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085414 4914 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085419 4914 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085423 4914 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085428 4914 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085432 4914 flags.go:64] FLAG: --storage-driver-password="root"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085437 4914 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085441 4914 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085445 4914 flags.go:64] FLAG: --storage-driver-user="root"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085450 4914 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085455 4914 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085459 4914 flags.go:64] FLAG: --system-cgroups=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085464 4914 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085472 4914 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085476 4914 flags.go:64] FLAG: --tls-cert-file=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085480 4914 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085485 4914 flags.go:64] FLAG: --tls-min-version=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085490 4914 flags.go:64] FLAG: --tls-private-key-file=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085494 4914 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085498 4914 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085502 4914 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085506 4914 flags.go:64] FLAG: --v="2"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085512 4914 flags.go:64] FLAG: --version="false"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085518 4914 flags.go:64] FLAG: --vmodule=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085523 4914 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085527 4914 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085621 4914 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085625 4914 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085629 4914 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085633 4914 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085637 4914 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085641 4914 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085644 4914 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085648 4914 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085654 4914 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085657 4914 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085661 4914 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085665 4914 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085669 4914 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085672 4914 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085676 4914 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085679 4914 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085683 4914 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085686 4914 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085690 4914 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085693 4914 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085698 4914 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085703 4914 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085708 4914 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085712 4914 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085716 4914 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085720 4914 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085724 4914 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085729 4914 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085732 4914 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085736 4914 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085740 4914 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085743 4914 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085746 4914 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085750 4914 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085754 4914 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085757 4914 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085760 4914 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085764 4914 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085768 4914 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085771 4914 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085775 4914 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085778 4914 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085782 4914 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085786 4914 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085789 4914 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085793 4914 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085796 4914 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085799 4914 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085803 4914 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085807 4914 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085810 4914 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085813 4914 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085817 4914 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085821 4914 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085842 4914 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085847 4914 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085851 4914 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085855 4914 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085859 4914 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085863 4914 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085867 4914 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085870 4914 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085874 4914 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085877 4914 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085881 4914 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085884 4914 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085888 4914 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085891 4914 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085895 4914 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085898 4914 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.085901 4914 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.085914 4914 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.096922 4914 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.096962 4914 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097049 4914 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097060 4914 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097065 4914 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097070 4914 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097075 4914 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097080 4914 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097085 4914 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097089 4914 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097094 4914 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097098 4914 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097102 4914 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097106 4914 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097111 4914 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097115 4914 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097119 4914 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097124 4914 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097128 4914 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097133 4914 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097137 4914 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097141 4914 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097147 4914 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097153 4914 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097157 4914 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097162 4914 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097166 4914 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097171 4914 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097174 4914 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097178 4914 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097181 4914 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097185 4914 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097188 4914 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097192 4914 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097195 4914 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097198 4914 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097203 4914 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097206 4914 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097210 4914 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097214 4914 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097222 4914 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097226 4914 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097229 4914 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097234 4914 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097237 4914 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097241 4914 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097244 4914 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097249 4914 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097253 4914 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097257 4914 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097261 4914 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097265 4914 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097270 4914 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097274 4914 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097278 4914 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097282 4914 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097286 4914 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097290 4914 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097294 4914 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097298 4914 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097302 4914 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097305 4914 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097308 4914 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097312 4914 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097315 4914 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097320 4914 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097324 4914 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097328 4914 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097332 4914 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097336 4914 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097340 4914 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097344 4914 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097348 4914 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.097357 4914 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097460 4914 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097467 4914 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097471 4914 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097475 4914 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097478 4914 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097481 4914 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097485 4914 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097488 4914 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097492 4914 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097495 4914 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097502 4914 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097507 4914 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097511 4914 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097515 4914 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097520 4914 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097523 4914 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097528 4914 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097532 4914 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097536 4914 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097539 4914 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097542 4914 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097546 4914 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097549 4914 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097552 4914 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097556 4914 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097560 4914 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097563 4914 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097566 4914 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097570 4914 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097573 4914 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097576 4914 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097580 4914 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097583 4914 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097587 4914 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097591 4914 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097594 4914 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097598 4914 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097601 4914 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097604 4914 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097608 4914 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097611 4914 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097614 4914 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097618 4914 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097622 4914 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097625 4914 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097629 4914 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097632 4914 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097636 4914 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097639 4914 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097644 4914 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097648 4914 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097652 4914 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097656 4914 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097659 4914 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097663 4914 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097667 4914 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097670 4914 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097674 4914 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097677 4914 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097680 4914 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097684 4914 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097687 4914 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097691 4914 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097694 4914 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097698 4914 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097701 4914 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097704 4914 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097708 4914 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097712 4914 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097716 4914 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.097720 4914 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.097726 4914 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.097929 4914 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.104617 4914 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.104856 4914 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.106548 4914 server.go:997] "Starting client certificate rotation"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.106585 4914 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.106765 4914 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-12 22:39:06.823356972 +0000 UTC
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.106856 4914 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.129101 4914 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.131736 4914 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.133182 4914 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.154557 4914 log.go:25] "Validated CRI v1 runtime API"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.190980 4914 log.go:25] "Validated CRI v1 image API"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.192908 4914 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.197305 4914 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-13-39-18-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.197341 4914 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.210146 4914 manager.go:217] Machine: {Timestamp:2026-01-27 13:44:02.208706256 +0000 UTC m=+0.521056362 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654116352 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b46996ba-6bdd-421e-afd7-e88de2c05d29 BootID:096812c4-5121-428c-9502-97f27967ca56 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:58:50:ac Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:58:50:ac Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:cf:2b:f1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:47:c5:5f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:fa:46:b1 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:5b:28:96 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:06:3f:c2:11:99:e8 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ea:1d:27:7a:f6:54 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654116352 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.210406 4914 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.210521 4914 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.211195 4914 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.211354 4914 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.211389 4914 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"10
0Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.211960 4914 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.211978 4914 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.212343 4914 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.212382 4914 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.212594 4914 state_mem.go:36] "Initialized new in-memory state store" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.212707 4914 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.219931 4914 kubelet.go:418] "Attempting to sync node with API server" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.219995 4914 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.220087 4914 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.220111 4914 kubelet.go:324] "Adding apiserver pod source" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.220131 4914 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.225137 4914 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.225413 4914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.225464 4914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.225509 4914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.225516 4914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.226192 4914 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.229265 4914 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.231057 4914 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.231104 4914 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.231120 4914 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.231134 4914 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.231155 4914 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.231168 4914 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.231182 4914 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.231204 4914 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.231219 4914 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.231233 4914 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.231251 4914 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.231264 4914 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.233165 4914 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.233767 4914 server.go:1280] "Started kubelet" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.234001 4914 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.233993 4914 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.234564 4914 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.234673 4914 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.235585 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.235617 4914 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 13:44:02 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.235799 4914 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.235810 4914 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.235860 4914 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.235807 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:43:34.218437837 +0000 UTC Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.235930 4914 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.236383 4914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.236462 4914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.236480 4914 factory.go:55] Registering systemd factory Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.236498 4914 factory.go:221] Registration of the systemd container factory successfully Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.236806 4914 server.go:460] "Adding debug handlers to kubelet server" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.236826 4914 
factory.go:153] Registering CRI-O factory Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.236854 4914 factory.go:221] Registration of the crio container factory successfully Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.236915 4914 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.236938 4914 factory.go:103] Registering Raw factory Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.236954 4914 manager.go:1196] Started watching for new ooms in manager Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.237634 4914 manager.go:319] Starting recovery of all containers Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.237610 4914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="200ms" Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.243880 4914 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.245:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e9a605ac7137c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 13:44:02.233717628 +0000 UTC m=+0.546067743,LastTimestamp:2026-01-27 13:44:02.233717628 +0000 UTC m=+0.546067743,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.255662 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.255729 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.255767 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.255778 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.255790 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.255871 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 
27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.255900 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.255924 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.255949 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.255972 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256005 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256017 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256032 4914 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256075 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256100 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256126 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256152 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256204 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256217 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256229 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256263 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256290 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256303 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256329 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256358 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256409 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256442 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256458 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256483 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256510 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256536 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" 
seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256565 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256592 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256639 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256654 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256666 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.256679 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 
13:44:02.257213 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257239 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257251 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257261 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257271 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257281 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257290 4914 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257301 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257310 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257319 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257328 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257341 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257352 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257363 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257373 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257387 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257398 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257409 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257420 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257429 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257440 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257457 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257468 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257479 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257491 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257501 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257511 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257522 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257532 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257542 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257551 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257560 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257570 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257579 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257589 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257599 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257609 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257620 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257629 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257640 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257652 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257662 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257672 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257683 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257693 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257704 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257716 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257727 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257736 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257746 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257776 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257787 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257796 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257808 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257819 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257844 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257857 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257871 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257884 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257897 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257910 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257923 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257935 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257947 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257959 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257971 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.257985 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258004 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258019 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258033 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258047 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258060 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258075 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258089 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258103 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258118 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258141 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258156 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258169 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258181 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258192 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258202 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258214 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258224 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258235 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258248 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258260 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258275 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258287 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258298 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258310 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258324 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258337 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258349 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258361 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258374 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258386 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258397 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258411 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258423 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258435 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258446 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258458 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258468 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258483 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258494 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258507 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258520 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258531 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258542 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258554 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258565 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258576 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258587 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258598 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258608 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258618 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258629 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258639 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258649 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258661 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258671 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258683 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258698 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258763 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258780 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258791 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258802 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258814 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258824 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258895 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258907 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.258919 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.260977 4914 manager.go:324] Recovery completed
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262517 4914 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262583 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262606 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262630 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262645 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262658 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262678 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262694 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262706 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262721 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262735 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262747 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262760 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262777 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" 
seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262793 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262810 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262824 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262883 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262896 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262910 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262924 4914 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262938 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262952 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262967 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262980 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.262994 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263019 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263034 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263047 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263061 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263077 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263090 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263105 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263120 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263142 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263156 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263169 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263183 4914 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263196 4914 reconstruct.go:97] "Volume reconstruction finished" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.263206 4914 reconciler.go:26] "Reconciler: start to sync state" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 
13:44:02.269223 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.271396 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.271450 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.271464 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.274802 4914 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.274820 4914 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.274953 4914 state_mem.go:36] "Initialized new in-memory state store" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.290177 4914 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.292245 4914 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.292977 4914 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.293036 4914 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.293109 4914 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.294187 4914 policy_none.go:49] "None policy: Start" Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.294344 4914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.294415 4914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.294751 4914 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.294781 4914 state_mem.go:35] "Initializing new in-memory state store" Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.336443 4914 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.359020 4914 manager.go:334] "Starting Device Plugin manager" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.359086 4914 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.359103 4914 server.go:79] "Starting device plugin registration server" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.359585 4914 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.359605 4914 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.359780 4914 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.359879 4914 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.359895 4914 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.368533 4914 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.393339 4914 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.393495 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.394949 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.394994 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.395009 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.395236 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.396042 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.396091 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.396767 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.396784 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.396795 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.396804 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.396808 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.396813 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.396945 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.397132 4914 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.397169 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.397513 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.397539 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.397549 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.397642 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.397794 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.397848 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398149 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398172 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398182 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398210 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398228 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398239 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398280 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398384 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398414 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398439 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398449 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398439 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398860 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398881 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.398891 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.399093 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.399124 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.399962 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.399995 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.400006 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.400076 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.400091 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.400102 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.438701 4914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="400ms" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.459677 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.460848 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:02 crc 
kubenswrapper[4914]: I0127 13:44:02.460876 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.460887 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.460929 4914 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.461391 4914 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.245:6443: connect: connection refused" node="crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465011 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465058 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465085 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465106 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465126 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465155 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465179 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465215 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465251 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465309 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465344 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465376 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465404 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465431 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.465457 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566094 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566145 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566176 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566192 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566212 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566229 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566247 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566273 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566302 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566314 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566336 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566320 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566370 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566373 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566379 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566389 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566337 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566372 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566334 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566356 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566402 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566303 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566425 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566451 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566493 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566535 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566561 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566643 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566663 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.566701 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.662068 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.663805 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.663873 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.663887 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.663916 4914 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.664375 4914 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.245:6443: connect: connection refused" node="crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.737013 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.756047 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.763349 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.782088 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: I0127 13:44:02.788708 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.796558 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-ca95cacc5b65bfda4d88e6d3ed10bff4cfe630d8908e9000068b7636b3c69286 WatchSource:0}: Error finding container ca95cacc5b65bfda4d88e6d3ed10bff4cfe630d8908e9000068b7636b3c69286: Status 404 returned error can't find the container with id ca95cacc5b65bfda4d88e6d3ed10bff4cfe630d8908e9000068b7636b3c69286
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.806165 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3e303ac021e6239a126360407f4056998dbfb9fdb8c337331e8ad9ba7e63fea5 WatchSource:0}: Error finding container 3e303ac021e6239a126360407f4056998dbfb9fdb8c337331e8ad9ba7e63fea5: Status 404 returned error can't find the container with id 3e303ac021e6239a126360407f4056998dbfb9fdb8c337331e8ad9ba7e63fea5
Jan 27 13:44:02 crc kubenswrapper[4914]: W0127 13:44:02.810967 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-4255d51d01cf10f7feeb5238c98970daad54798963d6c0efe4b2c618b1bbff6e WatchSource:0}: Error finding container 4255d51d01cf10f7feeb5238c98970daad54798963d6c0efe4b2c618b1bbff6e: Status 404 returned error can't find the container with id 4255d51d01cf10f7feeb5238c98970daad54798963d6c0efe4b2c618b1bbff6e
Jan 27 13:44:02 crc kubenswrapper[4914]: E0127 13:44:02.840164 4914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="800ms"
Jan 27 13:44:03 crc kubenswrapper[4914]: E0127 13:44:03.022805 4914 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.245:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e9a605ac7137c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 13:44:02.233717628 +0000 UTC m=+0.546067743,LastTimestamp:2026-01-27 13:44:02.233717628 +0000 UTC m=+0.546067743,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 27 13:44:03 crc kubenswrapper[4914]: W0127 13:44:03.040565 4914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused
Jan 27 13:44:03 crc kubenswrapper[4914]: E0127 13:44:03.040638 4914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError"
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.065036 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.066916 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.066952 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.066963 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.066988 4914 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 27 13:44:03 crc kubenswrapper[4914]: E0127 13:44:03.067413 4914 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.245:6443: connect: connection refused" node="crc"
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.235418 4914 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.236454 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:56:36.296522306 +0000 UTC
Jan 27 13:44:03 crc kubenswrapper[4914]: W0127 13:44:03.249193 4914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused
Jan 27 13:44:03 crc kubenswrapper[4914]: E0127 13:44:03.249271 4914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError"
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.297490 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4255d51d01cf10f7feeb5238c98970daad54798963d6c0efe4b2c618b1bbff6e"}
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.298469 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3e303ac021e6239a126360407f4056998dbfb9fdb8c337331e8ad9ba7e63fea5"}
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.299349 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ca95cacc5b65bfda4d88e6d3ed10bff4cfe630d8908e9000068b7636b3c69286"}
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.300153 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"26e1dbb464f78a3888161a2b2db417bdb53367d90d599c5ed642dfeb9f23dffb"}
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.302557 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0896ff73144f48b8dbf4408d5e3d02f9c996cd154902923ff620a22a1bf05626"}
Jan 27 13:44:03 crc kubenswrapper[4914]: W0127 13:44:03.429228 4914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused
Jan 27 13:44:03 crc kubenswrapper[4914]: E0127 13:44:03.429320 4914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError"
Jan 27 13:44:03 crc kubenswrapper[4914]: W0127 13:44:03.567473 4914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused
Jan 27 13:44:03 crc kubenswrapper[4914]: E0127 13:44:03.567574 4914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError"
Jan 27 13:44:03 crc kubenswrapper[4914]: E0127 13:44:03.641091 4914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="1.6s"
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.868539 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.871405 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.871472 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.871484 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:03 crc kubenswrapper[4914]: I0127 13:44:03.871521 4914 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 27 13:44:03 crc kubenswrapper[4914]: E0127 13:44:03.872545 4914 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.245:6443: connect: connection refused" node="crc"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.235914 4914 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.236910 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:32:26.345182111 +0000 UTC
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.308229 4914 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1" exitCode=0
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.308285 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1"}
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.308343 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.309927 4914 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00" exitCode=0
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.310008 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.310157 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.310186 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.310198 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.310010 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00"}
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.310815 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.310865 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.310874 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.312411 4914 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 27 13:44:04 crc kubenswrapper[4914]: E0127 13:44:04.313274 4914 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.317642 4914 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221" exitCode=0
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.317745 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221"}
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.317762 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.326630 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.326672 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.326687 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.328176 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.328998 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.329026 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.329038 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.345681 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.345602 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4"}
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.345817 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8"}
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.345848 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3"}
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.345862 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7"}
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.346775 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.346882 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.347091 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.354257 4914 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17" exitCode=0
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.354313 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17"}
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.354388 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.355279 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.355323 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:04 crc kubenswrapper[4914]: I0127 13:44:04.355337 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:05 crc kubenswrapper[4914]: W0127 13:44:05.015711 4914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused
Jan 27 13:44:05 crc kubenswrapper[4914]: E0127 13:44:05.015793 4914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError"
Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.236163 4914 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused
Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.237166 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:51:55.417613359 +0000 UTC
Jan 27 13:44:05 crc kubenswrapper[4914]: E0127 13:44:05.241731 4914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="3.2s"
Jan 27 13:44:05 crc kubenswrapper[4914]: W0127 13:44:05.354502 4914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused
Jan 27 13:44:05 crc kubenswrapper[4914]: E0127 13:44:05.354591 4914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError"
Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.358364 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0a33f463899dafa02c42203728101686cc7171af93777439ce8bbf7feb13fb8d"} Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.358401 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d3a165c9c9662f86e2a4031b517de6ca06d8fd0d5b946e8e777c4aff95601706"} Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.358407 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.358415 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"13b755de4898a164ac9577098c88092f6c7110e89fb07df414a9a1b7010598fc"} Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.359209 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.359231 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.359239 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.360699 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc"} Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.360724 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb"} Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.360733 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570"} Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.360742 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250"} Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.362144 4914 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de" exitCode=0 Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.362222 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de"} Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.362239 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.363103 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.363126 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.363136 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.363746 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"42bb4f493515c0d2dfc9e64dba4f32177efb75d52cb95be919d2d63b0c4948dc"} Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.363765 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.363807 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.364661 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.364689 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.364698 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.364666 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.364729 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.364743 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.472637 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.473864 4914 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.473899 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.473908 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:05 crc kubenswrapper[4914]: I0127 13:44:05.473933 4914 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 13:44:05 crc kubenswrapper[4914]: E0127 13:44:05.474364 4914 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.245:6443: connect: connection refused" node="crc" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.235476 4914 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.237663 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:27:43.964005466 +0000 UTC Jan 27 13:44:06 crc kubenswrapper[4914]: W0127 13:44:06.359481 4914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Jan 27 13:44:06 crc kubenswrapper[4914]: E0127 13:44:06.359569 4914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.369991 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c7c5128c8c5b576da9b36a45b8cd53496126f9dcf9be000efa3c7670fea08c13"} Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.370060 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:06 crc kubenswrapper[4914]: W0127 13:44:06.370250 4914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Jan 27 13:44:06 crc kubenswrapper[4914]: E0127 13:44:06.370323 4914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.371326 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.371371 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.371382 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.372461 
4914 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881" exitCode=0 Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.372573 4914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.372613 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.372729 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881"} Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.372810 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.372919 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.373466 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.373492 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.373509 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.373917 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.373943 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:06 crc 
kubenswrapper[4914]: I0127 13:44:06.373954 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.373991 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.374010 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.374021 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:06 crc kubenswrapper[4914]: I0127 13:44:06.521342 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.235927 4914 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.237923 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:45:57.804601841 +0000 UTC Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.377265 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.379261 4914 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c7c5128c8c5b576da9b36a45b8cd53496126f9dcf9be000efa3c7670fea08c13" exitCode=255 Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.379331 4914 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c7c5128c8c5b576da9b36a45b8cd53496126f9dcf9be000efa3c7670fea08c13"} Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.379444 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.380500 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.380532 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.380541 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.381346 4914 scope.go:117] "RemoveContainer" containerID="c7c5128c8c5b576da9b36a45b8cd53496126f9dcf9be000efa3c7670fea08c13" Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.386010 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd"} Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.386035 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a"} Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.386046 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30"} Jan 27 13:44:07 crc kubenswrapper[4914]: 
I0127 13:44:07.406328 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.406504 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.407465 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.407503 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.407511 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:07 crc kubenswrapper[4914]: I0127 13:44:07.440184 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.134381 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.134548 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.135593 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.135621 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.135629 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.238483 4914 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:06:02.597002221 +0000 UTC Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.357489 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.391378 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.394256 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9"} Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.394422 4914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.394584 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.395622 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.395681 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.395699 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.398821 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778"} Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.398890 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a"} Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.398905 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.398968 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.399940 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.400049 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.400090 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.400183 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.400201 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.400144 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.431633 4914 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 13:44:08 crc 
kubenswrapper[4914]: I0127 13:44:08.675234 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.677201 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.677353 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.677442 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.677561 4914 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.942935 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:44:08 crc kubenswrapper[4914]: I0127 13:44:08.966657 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.238752 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:59:54.865545063 +0000 UTC Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.401594 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.401632 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.401639 4914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.401816 4914 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.403297 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.403334 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.403347 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.403365 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.403384 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.403394 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.403989 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.404133 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.404250 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:09 crc kubenswrapper[4914]: I0127 13:44:09.511366 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.231826 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.239230 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:30:13.539396931 +0000 UTC
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.404650 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.404650 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.405774 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.405821 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.405875 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.406045 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.406124 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.406141 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.892803 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.893200 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.895315 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.895396 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:10 crc kubenswrapper[4914]: I0127 13:44:10.895417 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:11 crc kubenswrapper[4914]: I0127 13:44:11.240286 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:08:27.283641307 +0000 UTC
Jan 27 13:44:11 crc kubenswrapper[4914]: I0127 13:44:11.406444 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:11 crc kubenswrapper[4914]: I0127 13:44:11.407446 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:11 crc kubenswrapper[4914]: I0127 13:44:11.407513 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:11 crc kubenswrapper[4914]: I0127 13:44:11.407532 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:11 crc kubenswrapper[4914]: I0127 13:44:11.571259 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 13:44:11 crc kubenswrapper[4914]: I0127 13:44:11.571466 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:11 crc kubenswrapper[4914]: I0127 13:44:11.572748 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:11 crc kubenswrapper[4914]: I0127 13:44:11.572823 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:11 crc kubenswrapper[4914]: I0127 13:44:11.572928 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:12 crc kubenswrapper[4914]: I0127 13:44:12.240628 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:01:44.407322823 +0000 UTC
Jan 27 13:44:12 crc kubenswrapper[4914]: E0127 13:44:12.369510 4914 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 27 13:44:13 crc kubenswrapper[4914]: I0127 13:44:13.240897 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:52:54.888712203 +0000 UTC
Jan 27 13:44:14 crc kubenswrapper[4914]: I0127 13:44:14.241459 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 01:26:56.038938784 +0000 UTC
Jan 27 13:44:14 crc kubenswrapper[4914]: I0127 13:44:14.571756 4914 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 13:44:14 crc kubenswrapper[4914]: I0127 13:44:14.571884 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 13:44:15 crc kubenswrapper[4914]: I0127 13:44:15.241987 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:11:32.395016417 +0000 UTC
Jan 27 13:44:15 crc kubenswrapper[4914]: I0127 13:44:15.259339 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 27 13:44:15 crc kubenswrapper[4914]: I0127 13:44:15.259641 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:15 crc kubenswrapper[4914]: I0127 13:44:15.261099 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:15 crc kubenswrapper[4914]: I0127 13:44:15.261164 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:15 crc kubenswrapper[4914]: I0127 13:44:15.261177 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:16 crc kubenswrapper[4914]: I0127 13:44:16.242405 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:39:27.259254612 +0000 UTC
Jan 27 13:44:17 crc kubenswrapper[4914]: I0127 13:44:17.243193 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:18:08.320433676 +0000 UTC
Jan 27 13:44:17 crc kubenswrapper[4914]: I0127 13:44:17.444290 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 13:44:17 crc kubenswrapper[4914]: I0127 13:44:17.444461 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:17 crc kubenswrapper[4914]: I0127 13:44:17.445678 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:17 crc kubenswrapper[4914]: I0127 13:44:17.445716 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:17 crc kubenswrapper[4914]: I0127 13:44:17.445726 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:18 crc kubenswrapper[4914]: I0127 13:44:18.235513 4914 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 27 13:44:18 crc kubenswrapper[4914]: I0127 13:44:18.243689 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:54:14.835123556 +0000 UTC
Jan 27 13:44:18 crc kubenswrapper[4914]: E0127 13:44:18.434031 4914 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 27 13:44:18 crc kubenswrapper[4914]: E0127 13:44:18.442633 4914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Jan 27 13:44:18 crc kubenswrapper[4914]: E0127 13:44:18.678691 4914 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 27 13:44:18 crc kubenswrapper[4914]: I0127 13:44:18.738718 4914 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 27 13:44:18 crc kubenswrapper[4914]: I0127 13:44:18.738771 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 27 13:44:18 crc kubenswrapper[4914]: I0127 13:44:18.751770 4914 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Jan 27 13:44:18 crc kubenswrapper[4914]: I0127 13:44:18.752099 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 27 13:44:19 crc kubenswrapper[4914]: I0127 13:44:19.244294 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:14:20.954390334 +0000 UTC
Jan 27 13:44:20 crc kubenswrapper[4914]: I0127 13:44:20.236240 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 13:44:20 crc kubenswrapper[4914]: I0127 13:44:20.236424 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:20 crc kubenswrapper[4914]: I0127 13:44:20.237980 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:20 crc kubenswrapper[4914]: I0127 13:44:20.238029 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:20 crc kubenswrapper[4914]: I0127 13:44:20.238039 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:20 crc kubenswrapper[4914]: I0127 13:44:20.240592 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 13:44:20 crc kubenswrapper[4914]: I0127 13:44:20.244564 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:38:39.187967862 +0000 UTC
Jan 27 13:44:20 crc kubenswrapper[4914]: I0127 13:44:20.429436 4914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 13:44:20 crc kubenswrapper[4914]: I0127 13:44:20.429497 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 13:44:20 crc kubenswrapper[4914]: I0127 13:44:20.431149 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:44:20 crc kubenswrapper[4914]: I0127 13:44:20.431207 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:44:20 crc kubenswrapper[4914]: I0127 13:44:20.431223 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:44:21 crc kubenswrapper[4914]: I0127 13:44:21.245315 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:01:11.011015247 +0000 UTC
Jan 27 13:44:22 crc kubenswrapper[4914]: I0127 13:44:22.245707 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 10:03:08.843237258 +0000 UTC
Jan 27 13:44:22 crc kubenswrapper[4914]: E0127 13:44:22.370411 4914 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.246738 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 01:44:04.233555995 +0000 UTC
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.733729 4914 trace.go:236] Trace[541254510]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 13:44:12.425) (total time: 11307ms):
Jan 27 13:44:23 crc kubenswrapper[4914]: Trace[541254510]: ---"Objects listed" error: 11307ms (13:44:23.733)
Jan 27 13:44:23 crc kubenswrapper[4914]: Trace[541254510]: [11.307912572s] [11.307912572s] END
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.733760 4914 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.735082 4914 trace.go:236] Trace[935336175]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 13:44:10.618) (total time: 13116ms):
Jan 27 13:44:23 crc kubenswrapper[4914]: Trace[935336175]: ---"Objects listed" error: 13116ms (13:44:23.735)
Jan 27 13:44:23 crc kubenswrapper[4914]: Trace[935336175]: [13.11629561s] [13.11629561s] END
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.735262 4914 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.736330 4914 trace.go:236] Trace[2102770975]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 13:44:10.461) (total time: 13274ms):
Jan 27 13:44:23 crc kubenswrapper[4914]: Trace[2102770975]: ---"Objects listed" error: 13274ms (13:44:23.736)
Jan 27 13:44:23 crc kubenswrapper[4914]: Trace[2102770975]: [13.2745534s] [13.2745534s] END
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.736352 4914 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.736657 4914 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.737122 4914 trace.go:236] Trace[1166240346]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 13:44:09.971) (total time: 13765ms):
Jan 27 13:44:23 crc kubenswrapper[4914]: Trace[1166240346]: ---"Objects listed" error: 13765ms (13:44:23.736)
Jan 27 13:44:23 crc kubenswrapper[4914]: Trace[1166240346]: [13.765783408s] [13.765783408s] END
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.737158 4914 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.783000 4914 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48210->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.783053 4914 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38310->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.783076 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48210->192.168.126.11:17697: read: connection reset by peer"
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.783121 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38310->192.168.126.11:17697: read: connection reset by peer"
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.783371 4914 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.783398 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.813451 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 13:44:23 crc kubenswrapper[4914]: I0127 13:44:23.822306 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.233532 4914 apiserver.go:52] "Watching apiserver"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.235911 4914 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.236324 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.236819 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.236862 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.236977 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.237030 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.237100 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.237080 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.237168 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.237659 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.237756 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.239565 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.239582 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.239612 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.239636 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.239677 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.239798 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.239822 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.239862 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.239862 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.246809 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:49:57.623818723 +0000 UTC
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.266193 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.278234 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.289489 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.299393 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.308942 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.336717 4914 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.336994 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340253 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340337 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340368 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340393 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340416 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340444 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340474 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340498 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340520 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340543 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340565 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 
13:44:24.340587 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340849 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340914 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340941 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.340962 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.341006 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.341029 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.341060 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.341105 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.341122 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.341186 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.341250 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.341272 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.341299 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.343686 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.343904 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.344171 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.344292 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.344370 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.344502 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.344530 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.344746 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.345110 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.345241 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.348521 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.345265 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.345308 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.345406 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.344466 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.345676 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.345734 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.345775 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.348593 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.348646 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.348679 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 13:44:24 crc 
kubenswrapper[4914]: I0127 13:44:24.348709 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.348737 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.348765 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.348794 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.348823 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.348871 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.348902 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.349075 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.349154 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.349179 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.349202 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 13:44:24 crc 
kubenswrapper[4914]: I0127 13:44:24.349221 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.349242 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.349261 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.349819 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.349885 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.349916 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.345857 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.345972 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.347719 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.348956 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.349168 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.349915 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.349912 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.350162 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.350425 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.350445 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.350668 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.350668 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.350824 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.350938 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.351088 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.351339 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.351342 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.351378 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.351439 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.351496 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.351499 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.351539 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.351511 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.351570 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355535 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355586 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355572 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355615 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355694 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355708 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355746 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355774 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:44:24 crc 
kubenswrapper[4914]: I0127 13:44:24.355797 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355823 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355866 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355869 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355890 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355917 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355940 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355944 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355967 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.355995 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356023 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356047 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356071 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356093 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356112 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356116 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356173 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356202 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356225 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356248 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356270 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356296 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356320 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356331 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356340 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356381 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356392 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356420 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356451 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356474 4914 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356494 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356513 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356533 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356543 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356553 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356576 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356596 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356619 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356642 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356662 4914 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356685 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356706 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356729 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356736 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356751 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356776 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356798 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356821 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356864 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356886 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356905 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356928 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356952 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356974 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356998 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357021 
4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357041 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357065 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357088 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357455 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357481 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: 
\"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357504 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357526 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357645 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357676 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357701 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357735 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357763 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357793 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357821 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357868 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357895 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357922 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357952 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357978 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358002 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358046 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358071 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358096 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358150 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358173 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358194 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358215 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358237 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358263 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358284 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358308 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358333 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 13:44:24 crc 
kubenswrapper[4914]: I0127 13:44:24.358390 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358413 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358436 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358461 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358484 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358508 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358534 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358560 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358609 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358634 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358658 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 13:44:24 crc 
kubenswrapper[4914]: I0127 13:44:24.358683 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358707 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358732 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358757 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358781 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358805 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359263 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359302 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359325 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359348 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359373 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 
13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359396 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359424 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359449 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359471 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359492 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359516 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359541 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359567 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359594 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359619 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359642 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 
13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359674 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359698 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359720 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359741 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359766 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359792 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359816 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.359857 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356748 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.356782 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357001 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357172 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.363133 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357215 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357221 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357359 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357378 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357409 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357547 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357572 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357620 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357668 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357746 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357789 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357848 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358048 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358217 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358225 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358226 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358274 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358401 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358457 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358597 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358801 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.358813 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.360261 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.360361 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.360553 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.360711 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.360911 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.360976 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.361061 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.361204 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.361216 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.361478 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.361849 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.363417 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.361945 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.362529 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.362549 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.362615 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.362863 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.363070 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.357187 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.363873 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.363902 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.363938 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.363972 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364000 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364029 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364058 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364056 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364094 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364119 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364145 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364158 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364169 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364230 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364260 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364281 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364312 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364340 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364450 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364475 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364501 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364519 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364521 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364547 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364562 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364649 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364680 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364702 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364742 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364822 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.365066 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.365177 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.365468 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.365582 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.365753 4914 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.365901 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.365983 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.366195 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.366205 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.366491 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.366497 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.366766 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.366884 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.366964 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.367042 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.367171 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.367326 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.367339 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.367508 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.367801 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.367741 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.368188 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.368181 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.368270 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.368487 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.368503 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.368783 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.369164 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.369564 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.370708 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.370898 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.370938 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.371108 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.371119 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.371274 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.371293 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.371480 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.371484 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.371637 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.371660 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.371984 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.372000 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.372112 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.372160 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.372301 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.372339 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.372523 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.372762 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.372818 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373153 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373427 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.364742 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373495 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373536 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373568 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373601 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373629 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373660 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373684 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373713 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373736 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373763 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373787 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373817 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373860 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.373887 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374005 4914 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374022 4914 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374038 4914 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374049 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374061 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374074 4914 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374086 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374098 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374112 4914 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: 
I0127 13:44:24.374123 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374136 4914 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374149 4914 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374161 4914 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374172 4914 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374184 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374196 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374208 
4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374190 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374219 4914 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374286 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374307 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374324 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374375 4914 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 
13:44:24.374392 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374406 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374420 4914 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374436 4914 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374448 4914 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374461 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374474 4914 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374488 4914 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374501 4914 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374513 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374526 4914 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374540 4914 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374555 4914 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374569 4914 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374581 4914 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 
13:44:24.374594 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374607 4914 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374620 4914 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374633 4914 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374646 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374661 4914 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374674 4914 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374687 4914 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374701 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374714 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374727 4914 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374741 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374756 4914 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374774 4914 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374787 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" 
(UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374800 4914 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374814 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374846 4914 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374861 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374873 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374886 4914 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374899 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374912 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374924 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384049 4914 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384078 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384093 4914 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384105 4914 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384118 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 
13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384131 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384145 4914 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384159 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384185 4914 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384201 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384214 4914 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384227 4914 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: 
I0127 13:44:24.384239 4914 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384252 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384264 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384276 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384289 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384302 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384330 4914 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384343 4914 reconciler_common.go:293] "Volume 
detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384355 4914 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384369 4914 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384385 4914 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384399 4914 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384411 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384424 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384440 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384452 4914 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384464 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384477 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384491 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384504 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384516 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384528 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384540 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384553 4914 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384565 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384578 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384593 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384605 4914 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384619 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384631 4914 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384644 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384656 4914 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384667 4914 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384679 4914 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384692 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384704 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc 
kubenswrapper[4914]: I0127 13:44:24.384716 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384729 4914 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384745 4914 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384757 4914 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384768 4914 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384780 4914 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384791 4914 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384803 4914 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384815 4914 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384846 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384859 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384872 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384883 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384898 4914 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384910 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384923 4914 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384934 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384946 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384959 4914 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384971 4914 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384982 4914 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.384993 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385005 4914 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385030 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385043 4914 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385055 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385065 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385077 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385088 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on 
node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385101 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385114 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385126 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385138 4914 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385150 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385163 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385176 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 
13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385189 4914 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385202 4914 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385213 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385226 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385239 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385251 4914 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385265 4914 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385279 4914 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385291 4914 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385303 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.377338 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.377594 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.379008 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.379345 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.379694 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.379928 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.380272 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.380472 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.381950 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.382155 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385433 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.382618 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385167 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385745 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.385775 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.386026 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.386072 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.386266 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.386543 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.386604 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.386617 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:44:24.886563482 +0000 UTC m=+23.198913557 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.374524 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.387468 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod 
"7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.376107 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.387791 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.387809 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.388046 4914 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.388164 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:24.888145851 +0000 UTC m=+23.200495936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.388202 4914 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.388304 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 13:44:24.888296376 +0000 UTC m=+23.200646461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.389848 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.390131 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.390169 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.391061 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.391170 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.391272 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.391572 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.393112 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.416099 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.416372 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.416389 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.416697 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.417273 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.432029 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.437765 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.437808 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.437825 4914 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.437940 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:24.937903528 +0000 UTC m=+23.250253613 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.440446 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.441511 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.442261 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.442505 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.442528 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.442543 4914 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.442598 4914 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:24.942579723 +0000 UTC m=+23.254930198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.447384 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.449556 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.449988 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.451628 4914 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9" exitCode=255 Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.451775 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9"} Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.451873 4914 scope.go:117] "RemoveContainer" containerID="c7c5128c8c5b576da9b36a45b8cd53496126f9dcf9be000efa3c7670fea08c13" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.457562 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.472962 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.473289 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.473611 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.482391 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486111 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486141 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486173 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" 
DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486183 4914 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486192 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486201 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486211 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486220 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486228 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486237 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486245 4914 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486254 4914 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486262 4914 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486270 4914 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486301 4914 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486309 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486318 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486328 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node 
\"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486337 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486345 4914 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486353 4914 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486364 4914 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486372 4914 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486381 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486389 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486398 4914 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486407 4914 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486415 4914 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486424 4914 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486432 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486441 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486451 4914 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486460 4914 reconciler_common.go:293] "Volume detached 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486470 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486478 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486487 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486497 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486507 4914 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486515 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486524 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" 
DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486533 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486589 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.486621 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.504333 4914 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.505822 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.517695 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.529302 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.540086 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.551095 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.553284 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-
crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.557687 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.565075 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:44:24 crc kubenswrapper[4914]: W0127 13:44:24.569434 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-52491e23ff5950e4da38f2e4f84bbcaad222ed43d4c73e66c222a22456b6f349 WatchSource:0}: Error finding container 52491e23ff5950e4da38f2e4f84bbcaad222ed43d4c73e66c222a22456b6f349: Status 404 returned error can't find the container with id 52491e23ff5950e4da38f2e4f84bbcaad222ed43d4c73e66c222a22456b6f349 Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.570594 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.587815 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:24 crc kubenswrapper[4914]: W0127 13:44:24.592775 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-6bdb0c764b80c63f800c456c81a2a92ecd376a90d2ce49b92d014f8a54a81bc8 WatchSource:0}: Error finding container 6bdb0c764b80c63f800c456c81a2a92ecd376a90d2ce49b92d014f8a54a81bc8: Status 404 returned error can't find the container with id 6bdb0c764b80c63f800c456c81a2a92ecd376a90d2ce49b92d014f8a54a81bc8 Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.601670 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.616354 4914 scope.go:117] "RemoveContainer" containerID="fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9" Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.616619 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.618378 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.890149 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.890305 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.890379 4914 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.890388 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:44:25.890354179 +0000 UTC m=+24.202704284 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.890447 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:25.890420251 +0000 UTC m=+24.202770396 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.890472 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.890571 4914 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.890614 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:25.890604816 +0000 UTC m=+24.202954901 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.991307 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:24 crc kubenswrapper[4914]: I0127 13:44:24.991382 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.991506 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.991523 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.991535 4914 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.991569 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.991612 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.991629 4914 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.991587 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:25.991573446 +0000 UTC m=+24.303923531 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:24 crc kubenswrapper[4914]: E0127 13:44:24.991727 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-27 13:44:25.99170046 +0000 UTC m=+24.304050565 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.079626 4914 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.081476 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.081560 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.081576 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.081643 4914 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.095123 4914 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.095386 4914 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.096256 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.096277 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc 
kubenswrapper[4914]: I0127 13:44:25.096285 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.096298 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.096307 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:25 crc kubenswrapper[4914]: E0127 13:44:25.109071 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.112710 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.112738 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.112746 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.112760 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 
13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.112768 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:25 crc kubenswrapper[4914]: E0127 13:44:25.125249 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.129503 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.129539 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.129549 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.129565 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.129574 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:25 crc kubenswrapper[4914]: E0127 13:44:25.142516 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.148414 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.148444 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.148452 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.148465 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.148476 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:25 crc kubenswrapper[4914]: E0127 13:44:25.159454 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.163280 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.163308 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.163319 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.163338 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.163353 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:25 crc kubenswrapper[4914]: E0127 13:44:25.173381 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: E0127 13:44:25.173555 4914 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.176364 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.176403 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.176416 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.176438 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.176452 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.247745 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:53:05.318626483 +0000 UTC Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.279503 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.279559 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.279573 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.279595 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.279609 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.294111 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:25 crc kubenswrapper[4914]: E0127 13:44:25.294261 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.382074 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.382113 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.382123 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.382139 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.382151 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.443194 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.457262 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.457391 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"68d6289c1a94426d433643c3445182373d5c1d0975a78254ab1dcff9d3a43ce7"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.459150 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.459486 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.462093 4914 scope.go:117] "RemoveContainer" containerID="fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9" Jan 27 13:44:25 crc kubenswrapper[4914]: E0127 13:44:25.462311 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.463626 4914 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6bdb0c764b80c63f800c456c81a2a92ecd376a90d2ce49b92d014f8a54a81bc8"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.465534 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.465582 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.465602 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"52491e23ff5950e4da38f2e4f84bbcaad222ed43d4c73e66c222a22456b6f349"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.470951 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.478762 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.484729 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.484803 4914 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.484817 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.484868 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.484895 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.500918 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.533353 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.555950 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.572017 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.584864 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.587879 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.587937 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.587950 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.587967 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.587978 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.595779 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.607749 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c5128c8c5b576da9b36a45b8cd53496126f9dcf9be000efa3c7670fea08c13\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"message\\\":\\\"W0127 13:44:05.988385 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 13:44:05.989005 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769521445 cert, and key in /tmp/serving-cert-2222649912/serving-signer.crt, /tmp/serving-cert-2222649912/serving-signer.key\\\\nI0127 13:44:06.292193 1 observer_polling.go:159] Starting file observer\\\\nW0127 13:44:06.295488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 13:44:06.295701 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:06.296814 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2222649912/tls.crt::/tmp/serving-cert-2222649912/tls.key\\\\\\\"\\\\nF0127 13:44:06.504562 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.618679 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"
/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.627740 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.638368 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.657509 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.668790 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.683846 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.691201 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.691305 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.691320 4914 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.691346 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.691360 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.696725 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.709522 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.719411 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.760794 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-gnhrd"] Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.761166 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-gnhrd" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.763727 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.764008 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.764202 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.764403 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qhdfz"] Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.764665 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-554jw"] Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.764847 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.765457 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-6b628"] Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.765611 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.765731 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.766977 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7m5xg"] Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.767663 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.771282 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.771566 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.771595 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.771633 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.774503 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.774941 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.775332 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.775591 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.775972 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.776110 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.776160 4914 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.776289 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.776473 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.776580 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.776689 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.776921 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.777100 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.777300 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.780311 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.782653 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.793954 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.794003 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.794016 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.794035 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 
13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.794047 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.798793 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9ll5\" (UniqueName: \"kubernetes.io/projected/c183ba27-856b-4b3e-a8e4-3a1ef30a891a-kube-api-access-r9ll5\") pod \"node-resolver-gnhrd\" (UID: \"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\") " pod="openshift-dns/node-resolver-gnhrd" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.798870 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c183ba27-856b-4b3e-a8e4-3a1ef30a891a-hosts-file\") pod \"node-resolver-gnhrd\" (UID: \"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\") " pod="openshift-dns/node-resolver-gnhrd" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.805079 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.815036 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.824201 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.834644 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.846214 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.854383 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.864090 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.879677 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.890548 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.897278 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.897370 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.897383 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.897402 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.897416 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899464 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899543 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-log-socket\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899567 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-cni-netd\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899588 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1adce282-c454-4aa2-9cbe-356c7d371f98-ovn-node-metrics-cert\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899613 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-ovnkube-script-lib\") pod \"ovnkube-node-7m5xg\" (UID: 
\"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899637 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rbqn\" (UniqueName: \"kubernetes.io/projected/0669c8c6-fa51-4aab-bf05-50f96cd91035-kube-api-access-9rbqn\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899658 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-var-lib-cni-multus\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899682 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqbwv\" (UniqueName: \"kubernetes.io/projected/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-kube-api-access-rqbwv\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: E0127 13:44:25.899712 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:44:27.899684876 +0000 UTC m=+26.212035011 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899757 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-multus-socket-dir-parent\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899784 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-multus-daemon-config\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899803 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-cni-bin\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899818 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0669c8c6-fa51-4aab-bf05-50f96cd91035-cnibin\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " 
pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899865 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-node-log\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899906 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-run-ovn-kubernetes\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899927 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0669c8c6-fa51-4aab-bf05-50f96cd91035-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899954 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz8dq\" (UniqueName: \"kubernetes.io/projected/bdf2dcff-9caa-45ba-98a8-0a00861bd11a-kube-api-access-pz8dq\") pod \"machine-config-daemon-qhdfz\" (UID: \"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\") " pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899974 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/c183ba27-856b-4b3e-a8e4-3a1ef30a891a-hosts-file\") pod \"node-resolver-gnhrd\" (UID: \"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\") " pod="openshift-dns/node-resolver-gnhrd" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.899992 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-etc-openvswitch\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900007 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-multus-cni-dir\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900024 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-run-netns\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900039 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-run-multus-certs\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900058 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/0669c8c6-fa51-4aab-bf05-50f96cd91035-os-release\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900072 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0669c8c6-fa51-4aab-bf05-50f96cd91035-tuning-conf-dir\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900088 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-cni-binary-copy\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900103 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-ovn\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900119 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-etc-kubernetes\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900138 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-ovnkube-config\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900153 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-system-cni-dir\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900167 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-multus-conf-dir\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900188 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-systemd-units\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900207 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-systemd\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900221 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-openvswitch\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900238 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpnbf\" (UniqueName: \"kubernetes.io/projected/1adce282-c454-4aa2-9cbe-356c7d371f98-kube-api-access-vpnbf\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900252 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-run-k8s-cni-cncf-io\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900272 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900289 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-run-netns\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900307 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0669c8c6-fa51-4aab-bf05-50f96cd91035-cni-binary-copy\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900325 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-var-lib-cni-bin\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900340 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-var-lib-kubelet\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900358 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900395 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9ll5\" (UniqueName: \"kubernetes.io/projected/c183ba27-856b-4b3e-a8e4-3a1ef30a891a-kube-api-access-r9ll5\") pod \"node-resolver-gnhrd\" (UID: \"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\") " pod="openshift-dns/node-resolver-gnhrd" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 
13:44:25.900411 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0669c8c6-fa51-4aab-bf05-50f96cd91035-system-cni-dir\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900443 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-hostroot\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900459 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bdf2dcff-9caa-45ba-98a8-0a00861bd11a-rootfs\") pod \"machine-config-daemon-qhdfz\" (UID: \"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\") " pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900475 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdf2dcff-9caa-45ba-98a8-0a00861bd11a-proxy-tls\") pod \"machine-config-daemon-qhdfz\" (UID: \"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\") " pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900508 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-kubelet\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 
crc kubenswrapper[4914]: I0127 13:44:25.900531 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-slash\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900546 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900578 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-env-overrides\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900594 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-cnibin\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900609 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-var-lib-openvswitch\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900623 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-os-release\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900637 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdf2dcff-9caa-45ba-98a8-0a00861bd11a-mcd-auth-proxy-config\") pod \"machine-config-daemon-qhdfz\" (UID: \"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\") " pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.900764 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c183ba27-856b-4b3e-a8e4-3a1ef30a891a-hosts-file\") pod \"node-resolver-gnhrd\" (UID: \"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\") " pod="openshift-dns/node-resolver-gnhrd" Jan 27 13:44:25 crc kubenswrapper[4914]: E0127 13:44:25.900950 4914 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:25 crc kubenswrapper[4914]: E0127 13:44:25.900991 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:27.900974686 +0000 UTC m=+26.213324761 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:25 crc kubenswrapper[4914]: E0127 13:44:25.901303 4914 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:25 crc kubenswrapper[4914]: E0127 13:44:25.901348 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:27.901338117 +0000 UTC m=+26.213688262 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.903131 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.920102 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9ll5\" (UniqueName: \"kubernetes.io/projected/c183ba27-856b-4b3e-a8e4-3a1ef30a891a-kube-api-access-r9ll5\") pod \"node-resolver-gnhrd\" (UID: \"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\") " pod="openshift-dns/node-resolver-gnhrd" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.925363 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.935732 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c
026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.945818 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.953095 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.961192 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.969536 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.975327 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.986890 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.999664 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.999719 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.999731 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.999749 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:25 crc kubenswrapper[4914]: I0127 13:44:25.999760 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:25Z","lastTransitionTime":"2026-01-27T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003584 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003619 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0669c8c6-fa51-4aab-bf05-50f96cd91035-system-cni-dir\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003636 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-hostroot\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003654 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bdf2dcff-9caa-45ba-98a8-0a00861bd11a-rootfs\") pod \"machine-config-daemon-qhdfz\" (UID: \"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\") " pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003668 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdf2dcff-9caa-45ba-98a8-0a00861bd11a-proxy-tls\") pod \"machine-config-daemon-qhdfz\" (UID: \"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\") " 
pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003683 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-kubelet\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003697 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-slash\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003744 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003762 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-env-overrides\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: E0127 13:44:26.003768 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:44:26 crc kubenswrapper[4914]: E0127 13:44:26.003795 4914 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:26 crc kubenswrapper[4914]: E0127 13:44:26.003805 4914 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003823 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-cnibin\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003777 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-cnibin\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: E0127 13:44:26.003863 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:28.003847714 +0000 UTC m=+26.316197789 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003878 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-kubelet\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003900 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-slash\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003915 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-hostroot\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003917 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0669c8c6-fa51-4aab-bf05-50f96cd91035-system-cni-dir\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003944 4914 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003963 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-var-lib-openvswitch\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003943 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bdf2dcff-9caa-45ba-98a8-0a00861bd11a-rootfs\") pod \"machine-config-daemon-qhdfz\" (UID: \"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\") " pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.003986 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-os-release\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004008 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-var-lib-openvswitch\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004045 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-os-release\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004374 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-env-overrides\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004349 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004554 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdf2dcff-9caa-45ba-98a8-0a00861bd11a-mcd-auth-proxy-config\") pod \"machine-config-daemon-qhdfz\" (UID: \"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\") " pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004589 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdf2dcff-9caa-45ba-98a8-0a00861bd11a-mcd-auth-proxy-config\") pod \"machine-config-daemon-qhdfz\" (UID: \"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\") " pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004621 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-log-socket\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004636 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-cni-netd\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004677 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1adce282-c454-4aa2-9cbe-356c7d371f98-ovn-node-metrics-cert\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004693 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-ovnkube-script-lib\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004720 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-log-socket\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004735 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-cni-netd\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004772 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rbqn\" (UniqueName: 
\"kubernetes.io/projected/0669c8c6-fa51-4aab-bf05-50f96cd91035-kube-api-access-9rbqn\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004788 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-var-lib-cni-multus\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004894 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-var-lib-cni-multus\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.004806 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqbwv\" (UniqueName: \"kubernetes.io/projected/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-kube-api-access-rqbwv\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005068 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-multus-socket-dir-parent\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005087 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-multus-daemon-config\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005104 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-node-log\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005285 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-cni-bin\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005299 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0669c8c6-fa51-4aab-bf05-50f96cd91035-cnibin\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005249 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-multus-socket-dir-parent\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005369 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0669c8c6-fa51-4aab-bf05-50f96cd91035-cnibin\") pod \"multus-additional-cni-plugins-554jw\" (UID: 
\"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005351 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-cni-bin\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005179 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-node-log\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005316 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005423 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-run-ovn-kubernetes\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005445 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0669c8c6-fa51-4aab-bf05-50f96cd91035-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-554jw\" 
(UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005502 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz8dq\" (UniqueName: \"kubernetes.io/projected/bdf2dcff-9caa-45ba-98a8-0a00861bd11a-kube-api-access-pz8dq\") pod \"machine-config-daemon-qhdfz\" (UID: \"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\") " pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005521 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-etc-openvswitch\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005537 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-multus-cni-dir\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005557 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-run-netns\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005592 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-run-multus-certs\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " 
pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005457 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-ovnkube-script-lib\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: E0127 13:44:26.005520 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:44:26 crc kubenswrapper[4914]: E0127 13:44:26.005664 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:26 crc kubenswrapper[4914]: E0127 13:44:26.005677 4914 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005672 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-etc-openvswitch\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005481 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-run-ovn-kubernetes\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005707 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-run-netns\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005739 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0669c8c6-fa51-4aab-bf05-50f96cd91035-os-release\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: E0127 13:44:26.005760 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:28.005742093 +0000 UTC m=+26.318092278 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005759 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-multus-daemon-config\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005792 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0669c8c6-fa51-4aab-bf05-50f96cd91035-tuning-conf-dir\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005802 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0669c8c6-fa51-4aab-bf05-50f96cd91035-os-release\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005786 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-run-multus-certs\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: 
I0127 13:44:26.005825 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-multus-cni-dir\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005818 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-cni-binary-copy\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005899 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-ovn\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005918 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-etc-kubernetes\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005936 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-ovnkube-config\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005941 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-ovn\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005950 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-system-cni-dir\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005967 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-multus-conf-dir\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005976 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-etc-kubernetes\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.005986 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-systemd-units\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006003 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-systemd\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006018 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-system-cni-dir\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006020 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-openvswitch\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006042 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-openvswitch\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006052 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpnbf\" (UniqueName: \"kubernetes.io/projected/1adce282-c454-4aa2-9cbe-356c7d371f98-kube-api-access-vpnbf\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006066 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-multus-conf-dir\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006072 4914 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-run-k8s-cni-cncf-io\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006090 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-systemd-units\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006113 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-run-k8s-cni-cncf-io\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006114 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-run-netns\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006130 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-run-netns\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006143 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/0669c8c6-fa51-4aab-bf05-50f96cd91035-cni-binary-copy\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006154 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-systemd\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006162 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-var-lib-cni-bin\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006181 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-var-lib-kubelet\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006233 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-var-lib-kubelet\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006390 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-cni-binary-copy\") pod 
\"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006460 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-host-var-lib-cni-bin\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006745 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0669c8c6-fa51-4aab-bf05-50f96cd91035-cni-binary-copy\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.006901 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-ovnkube-config\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.007110 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0669c8c6-fa51-4aab-bf05-50f96cd91035-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.007725 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0669c8c6-fa51-4aab-bf05-50f96cd91035-tuning-conf-dir\") pod \"multus-additional-cni-plugins-554jw\" (UID: 
\"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.018420 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdf2dcff-9caa-45ba-98a8-0a00861bd11a-proxy-tls\") pod \"machine-config-daemon-qhdfz\" (UID: \"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\") " pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.019041 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1adce282-c454-4aa2-9cbe-356c7d371f98-ovn-node-metrics-cert\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.023232 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-ce
rt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-open
vswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.023405 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz8dq\" (UniqueName: \"kubernetes.io/projected/bdf2dcff-9caa-45ba-98a8-0a00861bd11a-kube-api-access-pz8dq\") pod \"machine-config-daemon-qhdfz\" (UID: \"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\") " pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.023865 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqbwv\" (UniqueName: \"kubernetes.io/projected/38170a87-0bc0-4c7d-b7a0-45b86a1f79e3-kube-api-access-rqbwv\") pod \"multus-6b628\" (UID: \"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\") " pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.024204 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rbqn\" (UniqueName: \"kubernetes.io/projected/0669c8c6-fa51-4aab-bf05-50f96cd91035-kube-api-access-9rbqn\") pod \"multus-additional-cni-plugins-554jw\" (UID: \"0669c8c6-fa51-4aab-bf05-50f96cd91035\") " pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.024885 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpnbf\" 
(UniqueName: \"kubernetes.io/projected/1adce282-c454-4aa2-9cbe-356c7d371f98-kube-api-access-vpnbf\") pod \"ovnkube-node-7m5xg\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.035221 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.047179 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.070408 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.076602 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-gnhrd" Jan 27 13:44:26 crc kubenswrapper[4914]: W0127 13:44:26.087005 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc183ba27_856b_4b3e_a8e4_3a1ef30a891a.slice/crio-90d1c4a3a2fcc62354dc6615247237f71390b08a1517c33c3ee6f182cb9b2e32 WatchSource:0}: Error finding container 90d1c4a3a2fcc62354dc6615247237f71390b08a1517c33c3ee6f182cb9b2e32: Status 404 returned error can't find the container with id 90d1c4a3a2fcc62354dc6615247237f71390b08a1517c33c3ee6f182cb9b2e32 Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.095001 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.103389 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.103552 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.103697 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.103732 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.103746 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:26Z","lastTransitionTime":"2026-01-27T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.106052 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-554jw" Jan 27 13:44:26 crc kubenswrapper[4914]: W0127 13:44:26.111103 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdf2dcff_9caa_45ba_98a8_0a00861bd11a.slice/crio-92ca549bf2c20c5c3a1d05971a6e9a65e7c264eca708f400561bee9343b7b0de WatchSource:0}: Error finding container 92ca549bf2c20c5c3a1d05971a6e9a65e7c264eca708f400561bee9343b7b0de: Status 404 returned error can't find the container with id 92ca549bf2c20c5c3a1d05971a6e9a65e7c264eca708f400561bee9343b7b0de Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.112262 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6b628" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.118384 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:26 crc kubenswrapper[4914]: W0127 13:44:26.135214 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38170a87_0bc0_4c7d_b7a0_45b86a1f79e3.slice/crio-34441519deb2e7cccb8e2ccc290b75a3e190177cbd10b264b710a6b2b292833a WatchSource:0}: Error finding container 34441519deb2e7cccb8e2ccc290b75a3e190177cbd10b264b710a6b2b292833a: Status 404 returned error can't find the container with id 34441519deb2e7cccb8e2ccc290b75a3e190177cbd10b264b710a6b2b292833a Jan 27 13:44:26 crc kubenswrapper[4914]: W0127 13:44:26.145012 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0669c8c6_fa51_4aab_bf05_50f96cd91035.slice/crio-383c2fe6f57930623eace952cdbace5fdb75ea388956fcc4ced6089e7b024c68 WatchSource:0}: Error finding container 383c2fe6f57930623eace952cdbace5fdb75ea388956fcc4ced6089e7b024c68: Status 404 returned error can't find the container with id 383c2fe6f57930623eace952cdbace5fdb75ea388956fcc4ced6089e7b024c68 Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.207438 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.207485 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.207496 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.207509 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.207519 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:26Z","lastTransitionTime":"2026-01-27T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.249185 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:29:52.103861976 +0000 UTC Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.293720 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:26 crc kubenswrapper[4914]: E0127 13:44:26.293865 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.294075 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:26 crc kubenswrapper[4914]: E0127 13:44:26.294181 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.300749 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.302664 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.304197 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.305602 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.306468 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.307708 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.308488 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.309242 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.310510 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.310538 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.310548 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.310550 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.310564 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.310578 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:26Z","lastTransitionTime":"2026-01-27T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.311191 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.312224 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.312973 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.314346 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.315237 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.316569 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.317243 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.317984 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" 
path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.318992 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.319662 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.320389 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.321492 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.322214 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.323396 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.324320 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.324876 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.326073 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.327267 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.327949 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.328708 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.329708 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.330309 4914 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.330436 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.332492 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.333091 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.333827 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.335813 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.336997 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.337702 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.339016 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.339846 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.340888 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.341649 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.343014 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.344272 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.344823 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.345753 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.346335 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.347543 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.348140 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.348642 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.349513 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.350089 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.351110 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.351681 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.412532 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.412571 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.412583 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.412599 4914 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.412612 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:26Z","lastTransitionTime":"2026-01-27T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.469862 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gnhrd" event={"ID":"c183ba27-856b-4b3e-a8e4-3a1ef30a891a","Type":"ContainerStarted","Data":"90d1c4a3a2fcc62354dc6615247237f71390b08a1517c33c3ee6f182cb9b2e32"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.470700 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerStarted","Data":"44a393d239c0a7f128ed5c5ba1ba6dd4a5a5d354c49c94f07c6bb9b4c9afff82"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.471470 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b628" event={"ID":"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3","Type":"ContainerStarted","Data":"34441519deb2e7cccb8e2ccc290b75a3e190177cbd10b264b710a6b2b292833a"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.472195 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" event={"ID":"0669c8c6-fa51-4aab-bf05-50f96cd91035","Type":"ContainerStarted","Data":"383c2fe6f57930623eace952cdbace5fdb75ea388956fcc4ced6089e7b024c68"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.474177 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"92ca549bf2c20c5c3a1d05971a6e9a65e7c264eca708f400561bee9343b7b0de"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.515393 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.515437 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.515447 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.515464 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.515474 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:26Z","lastTransitionTime":"2026-01-27T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.618820 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.618884 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.618897 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.618914 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.618926 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:26Z","lastTransitionTime":"2026-01-27T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.721305 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.721339 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.721350 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.721365 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.721376 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:26Z","lastTransitionTime":"2026-01-27T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.824070 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.824436 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.824450 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.824465 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.824842 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:26Z","lastTransitionTime":"2026-01-27T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.928086 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.928140 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.928150 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.928168 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:26 crc kubenswrapper[4914]: I0127 13:44:26.928179 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:26Z","lastTransitionTime":"2026-01-27T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.030607 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.030647 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.030658 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.030674 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.030685 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:27Z","lastTransitionTime":"2026-01-27T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.052017 4914 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.062779 4914 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.081927 4914 csr.go:261] certificate signing request csr-m54mw is approved, waiting to be issued Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.133381 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.133423 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.133435 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.133452 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.133464 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:27Z","lastTransitionTime":"2026-01-27T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.154326 4914 csr.go:257] certificate signing request csr-m54mw is issued Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.235900 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.235945 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.235956 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.235971 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.235982 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:27Z","lastTransitionTime":"2026-01-27T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.249568 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:44:38.647136562 +0000 UTC Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.293687 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:27 crc kubenswrapper[4914]: E0127 13:44:27.293860 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.338391 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.338454 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.338469 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.338788 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.339043 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:27Z","lastTransitionTime":"2026-01-27T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.449671 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.449733 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.449749 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.449769 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.449783 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:27Z","lastTransitionTime":"2026-01-27T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.479075 4914 generic.go:334] "Generic (PLEG): container finished" podID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerID="10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5" exitCode=0 Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.479163 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.481007 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b628" event={"ID":"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3","Type":"ContainerStarted","Data":"d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.483019 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" event={"ID":"0669c8c6-fa51-4aab-bf05-50f96cd91035","Type":"ContainerStarted","Data":"c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.487259 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gnhrd" event={"ID":"c183ba27-856b-4b3e-a8e4-3a1ef30a891a","Type":"ContainerStarted","Data":"db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.489965 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.490030 4914 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.492464 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.498580 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.511645 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.527990 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.543726 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.552507 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.552553 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.552565 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.552586 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.552599 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:27Z","lastTransitionTime":"2026-01-27T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.565001 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.576187 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.590288 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.601486 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.614767 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.632010 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.653229 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.654972 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.655004 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.655014 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.655029 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.655038 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:27Z","lastTransitionTime":"2026-01-27T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.669566 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.683115 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.704492 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.730631 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.743464 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:
03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.754748 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.757186 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.757211 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.757220 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.757231 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.757239 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:27Z","lastTransitionTime":"2026-01-27T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.765490 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.775969 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.787371 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.797298 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.810747 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.822728 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.838869 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.850460 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.860244 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.860377 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.860435 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.860500 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.860555 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:27Z","lastTransitionTime":"2026-01-27T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.862015 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.873711 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be
90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.886344 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.924993 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.925148 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.925174 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:27 crc kubenswrapper[4914]: E0127 13:44:27.925342 4914 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:27 crc kubenswrapper[4914]: E0127 13:44:27.925417 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:31.925395868 +0000 UTC m=+30.237745953 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:27 crc kubenswrapper[4914]: E0127 13:44:27.925433 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 13:44:31.925427659 +0000 UTC m=+30.237777744 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:44:27 crc kubenswrapper[4914]: E0127 13:44:27.925470 4914 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:27 crc kubenswrapper[4914]: E0127 13:44:27.925506 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:31.925496381 +0000 UTC m=+30.237846466 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.962466 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.962505 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.962515 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.962530 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:27 crc kubenswrapper[4914]: I0127 13:44:27.962541 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:27Z","lastTransitionTime":"2026-01-27T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.026460 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.026516 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:28 crc kubenswrapper[4914]: E0127 13:44:28.026642 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:44:28 crc kubenswrapper[4914]: E0127 13:44:28.026661 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:28 crc kubenswrapper[4914]: E0127 13:44:28.026673 4914 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:28 crc kubenswrapper[4914]: E0127 13:44:28.026690 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 
13:44:28 crc kubenswrapper[4914]: E0127 13:44:28.026727 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:32.026710668 +0000 UTC m=+30.339060753 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:28 crc kubenswrapper[4914]: E0127 13:44:28.026732 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:28 crc kubenswrapper[4914]: E0127 13:44:28.026750 4914 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:28 crc kubenswrapper[4914]: E0127 13:44:28.026808 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:32.026789602 +0000 UTC m=+30.339139767 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.070509 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.070554 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.070567 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.070584 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.070595 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:28Z","lastTransitionTime":"2026-01-27T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.156151 4914 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 13:39:27 +0000 UTC, rotation deadline is 2026-10-17 02:27:24.802969652 +0000 UTC Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.156210 4914 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6300h42m56.646763832s for next certificate rotation Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.173794 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.173843 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.173855 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.173872 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.173884 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:28Z","lastTransitionTime":"2026-01-27T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.250057 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:42:12.597486282 +0000 UTC Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.276039 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.276083 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.276093 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.276114 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.276125 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:28Z","lastTransitionTime":"2026-01-27T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.293551 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:28 crc kubenswrapper[4914]: E0127 13:44:28.293675 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.293746 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:28 crc kubenswrapper[4914]: E0127 13:44:28.293903 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.357917 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.358535 4914 scope.go:117] "RemoveContainer" containerID="fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9" Jan 27 13:44:28 crc kubenswrapper[4914]: E0127 13:44:28.358695 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.378151 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.378190 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.378201 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.378219 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.378231 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:28Z","lastTransitionTime":"2026-01-27T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.480817 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.480881 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.480891 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.480904 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.480914 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:28Z","lastTransitionTime":"2026-01-27T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.497125 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerStarted","Data":"60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.497175 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerStarted","Data":"4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.497190 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerStarted","Data":"4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.498314 4914 generic.go:334] "Generic (PLEG): container finished" podID="0669c8c6-fa51-4aab-bf05-50f96cd91035" containerID="c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd" exitCode=0 Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.498430 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" event={"ID":"0669c8c6-fa51-4aab-bf05-50f96cd91035","Type":"ContainerDied","Data":"c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.511296 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.520089 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.533871 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.548319 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.583668 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:28 crc 
kubenswrapper[4914]: I0127 13:44:28.583706 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.583718 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.583735 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.583748 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:28Z","lastTransitionTime":"2026-01-27T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.609445 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.642716 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be
90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.672148 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.683085 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.686336 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.686392 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.686404 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.686421 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.686431 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:28Z","lastTransitionTime":"2026-01-27T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.696783 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.720914 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.738082 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:
03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.751106 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.764428 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.776576 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.789546 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 
13:44:28.789594 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.789602 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.789618 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.789627 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:28Z","lastTransitionTime":"2026-01-27T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.892681 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.892717 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.892729 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.892746 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.892758 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:28Z","lastTransitionTime":"2026-01-27T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.994982 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.995070 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.995084 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.995110 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:28 crc kubenswrapper[4914]: I0127 13:44:28.995142 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:28Z","lastTransitionTime":"2026-01-27T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.097371 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.097408 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.097417 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.097432 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.097441 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:29Z","lastTransitionTime":"2026-01-27T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.199753 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.199792 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.199807 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.199845 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.199859 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:29Z","lastTransitionTime":"2026-01-27T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.250890 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:56:53.768238643 +0000 UTC Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.294267 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:29 crc kubenswrapper[4914]: E0127 13:44:29.294411 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.302059 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.302090 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.302101 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.302115 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.302127 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:29Z","lastTransitionTime":"2026-01-27T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.404550 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.404591 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.404603 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.404617 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.404626 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:29Z","lastTransitionTime":"2026-01-27T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.505951 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.505967 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerStarted","Data":"009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d"} Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.506024 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerStarted","Data":"c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d"} Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.506001 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.506046 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.506059 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.506069 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:29Z","lastTransitionTime":"2026-01-27T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.507700 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" event={"ID":"0669c8c6-fa51-4aab-bf05-50f96cd91035","Type":"ContainerStarted","Data":"a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094"} Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.525192 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.537102 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.552130 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.568793 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.586810 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.598999 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.608206 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.608266 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.608277 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.608297 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.608310 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:29Z","lastTransitionTime":"2026-01-27T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.611295 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.628890 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be
90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.641238 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.664663 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.678030 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:
03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.691276 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.703293 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.710738 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.710772 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.710782 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.710797 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.710806 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:29Z","lastTransitionTime":"2026-01-27T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.716918 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:29Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.814324 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.814372 4914 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.814384 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.814401 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.814414 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:29Z","lastTransitionTime":"2026-01-27T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.916986 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.917052 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.917062 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.917079 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:29 crc kubenswrapper[4914]: I0127 13:44:29.917098 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:29Z","lastTransitionTime":"2026-01-27T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.018923 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.019003 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.019017 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.019042 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.019056 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:30Z","lastTransitionTime":"2026-01-27T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.121670 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.121708 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.121718 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.121733 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.121759 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:30Z","lastTransitionTime":"2026-01-27T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.224221 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.224280 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.224291 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.224348 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.224361 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:30Z","lastTransitionTime":"2026-01-27T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.251798 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 01:45:23.648540061 +0000 UTC Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.293438 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.293526 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:30 crc kubenswrapper[4914]: E0127 13:44:30.293632 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:30 crc kubenswrapper[4914]: E0127 13:44:30.293703 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.328259 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.328297 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.328308 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.328324 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.328335 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:30Z","lastTransitionTime":"2026-01-27T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.430711 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.430754 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.430764 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.430780 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.430792 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:30Z","lastTransitionTime":"2026-01-27T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.515728 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerStarted","Data":"4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef"} Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.535943 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.535987 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.535999 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.536016 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.536027 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:30Z","lastTransitionTime":"2026-01-27T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.639322 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.639368 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.639378 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.639432 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.639453 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:30Z","lastTransitionTime":"2026-01-27T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.741921 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.741967 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.741975 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.741991 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.742000 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:30Z","lastTransitionTime":"2026-01-27T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.844457 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.844500 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.844511 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.844530 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.844544 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:30Z","lastTransitionTime":"2026-01-27T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.948185 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.948228 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.948241 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.948265 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:30 crc kubenswrapper[4914]: I0127 13:44:30.948276 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:30Z","lastTransitionTime":"2026-01-27T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.050934 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.050958 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.050966 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.050979 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.050987 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:31Z","lastTransitionTime":"2026-01-27T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.153167 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.153214 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.153227 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.153245 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.153258 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:31Z","lastTransitionTime":"2026-01-27T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.252308 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:34:38.815712616 +0000 UTC Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.255491 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.255529 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.255541 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.255562 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.255574 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:31Z","lastTransitionTime":"2026-01-27T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.293987 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:31 crc kubenswrapper[4914]: E0127 13:44:31.294490 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.358672 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.358708 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.358719 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.358737 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.358747 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:31Z","lastTransitionTime":"2026-01-27T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.460794 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.460856 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.460868 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.460888 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.460902 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:31Z","lastTransitionTime":"2026-01-27T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.520296 4914 generic.go:334] "Generic (PLEG): container finished" podID="0669c8c6-fa51-4aab-bf05-50f96cd91035" containerID="a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094" exitCode=0 Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.520337 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" event={"ID":"0669c8c6-fa51-4aab-bf05-50f96cd91035","Type":"ContainerDied","Data":"a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094"} Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.534138 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.545564 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.563528 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.563582 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.563595 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.563615 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.563629 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:31Z","lastTransitionTime":"2026-01-27T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.564937 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.578466 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.617182 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.634473 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.650270 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.663912 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.666322 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.666375 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.666387 4914 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.666404 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.666418 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:31Z","lastTransitionTime":"2026-01-27T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.678476 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.698371 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.716688 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:
03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.730957 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.743908 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.768892 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.768939 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.768954 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.768972 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.768984 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:31Z","lastTransitionTime":"2026-01-27T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.778403 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.871614 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.871652 4914 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.871660 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.871675 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.871685 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:31Z","lastTransitionTime":"2026-01-27T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.973241 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.973275 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.973283 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.973296 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:31 crc kubenswrapper[4914]: I0127 13:44:31.973310 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:31Z","lastTransitionTime":"2026-01-27T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.018944 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.019055 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.019081 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.019229 4914 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.019301 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 13:44:40.019283816 +0000 UTC m=+38.331633901 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.019324 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:44:40.019313777 +0000 UTC m=+38.331663862 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.019315 4914 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.019551 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:40.019515593 +0000 UTC m=+38.331865838 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.076005 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.076048 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.076056 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.076070 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.076079 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:32Z","lastTransitionTime":"2026-01-27T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.104231 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-5vprj"] Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.104616 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5vprj" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.106974 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.107168 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.107302 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.107288 4914 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.109668 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.120327 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.120391 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.120502 4914 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.120522 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.120534 4914 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.120575 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:40.120562905 +0000 UTC m=+38.432912990 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.120926 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.120944 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.120951 4914 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.120975 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:40.120968157 +0000 UTC m=+38.433318232 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.178337 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.178388 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.178398 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.178420 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.178434 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:32Z","lastTransitionTime":"2026-01-27T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.221658 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afff8b35-f3f4-4f13-a19d-cb318f982fbc-host\") pod \"node-ca-5vprj\" (UID: \"afff8b35-f3f4-4f13-a19d-cb318f982fbc\") " pod="openshift-image-registry/node-ca-5vprj" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.221737 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/afff8b35-f3f4-4f13-a19d-cb318f982fbc-serviceca\") pod \"node-ca-5vprj\" (UID: \"afff8b35-f3f4-4f13-a19d-cb318f982fbc\") " pod="openshift-image-registry/node-ca-5vprj" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.221755 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rspcf\" (UniqueName: \"kubernetes.io/projected/afff8b35-f3f4-4f13-a19d-cb318f982fbc-kube-api-access-rspcf\") pod \"node-ca-5vprj\" (UID: \"afff8b35-f3f4-4f13-a19d-cb318f982fbc\") " pod="openshift-image-registry/node-ca-5vprj" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.253323 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:12:22.87786825 +0000 UTC Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.285688 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.285718 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.285727 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:32 crc 
kubenswrapper[4914]: I0127 13:44:32.285742 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.285751 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:32Z","lastTransitionTime":"2026-01-27T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.293412 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.293542 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.294004 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:32 crc kubenswrapper[4914]: E0127 13:44:32.294078 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.322734 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afff8b35-f3f4-4f13-a19d-cb318f982fbc-host\") pod \"node-ca-5vprj\" (UID: \"afff8b35-f3f4-4f13-a19d-cb318f982fbc\") " pod="openshift-image-registry/node-ca-5vprj" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.322815 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/afff8b35-f3f4-4f13-a19d-cb318f982fbc-serviceca\") pod \"node-ca-5vprj\" (UID: \"afff8b35-f3f4-4f13-a19d-cb318f982fbc\") " pod="openshift-image-registry/node-ca-5vprj" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.322862 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rspcf\" (UniqueName: \"kubernetes.io/projected/afff8b35-f3f4-4f13-a19d-cb318f982fbc-kube-api-access-rspcf\") pod \"node-ca-5vprj\" (UID: \"afff8b35-f3f4-4f13-a19d-cb318f982fbc\") " pod="openshift-image-registry/node-ca-5vprj" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.322886 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afff8b35-f3f4-4f13-a19d-cb318f982fbc-host\") pod \"node-ca-5vprj\" (UID: \"afff8b35-f3f4-4f13-a19d-cb318f982fbc\") " pod="openshift-image-registry/node-ca-5vprj" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.344384 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rspcf\" (UniqueName: \"kubernetes.io/projected/afff8b35-f3f4-4f13-a19d-cb318f982fbc-kube-api-access-rspcf\") pod \"node-ca-5vprj\" (UID: \"afff8b35-f3f4-4f13-a19d-cb318f982fbc\") " pod="openshift-image-registry/node-ca-5vprj" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 
13:44:32.387346 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.387395 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.387405 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.387421 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.387432 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:32Z","lastTransitionTime":"2026-01-27T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.393330 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/afff8b35-f3f4-4f13-a19d-cb318f982fbc-serviceca\") pod \"node-ca-5vprj\" (UID: \"afff8b35-f3f4-4f13-a19d-cb318f982fbc\") " pod="openshift-image-registry/node-ca-5vprj" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.432926 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5vprj" Jan 27 13:44:32 crc kubenswrapper[4914]: W0127 13:44:32.448920 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafff8b35_f3f4_4f13_a19d_cb318f982fbc.slice/crio-9a639e047f29c838e1b7ff197c295ae77cd223a6cdb41c9528a86348abdfb691 WatchSource:0}: Error finding container 9a639e047f29c838e1b7ff197c295ae77cd223a6cdb41c9528a86348abdfb691: Status 404 returned error can't find the container with id 9a639e047f29c838e1b7ff197c295ae77cd223a6cdb41c9528a86348abdfb691 Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.489959 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.490003 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.490013 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.490026 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.490036 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:32Z","lastTransitionTime":"2026-01-27T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.524414 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5vprj" event={"ID":"afff8b35-f3f4-4f13-a19d-cb318f982fbc","Type":"ContainerStarted","Data":"9a639e047f29c838e1b7ff197c295ae77cd223a6cdb41c9528a86348abdfb691"} Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.526606 4914 generic.go:334] "Generic (PLEG): container finished" podID="0669c8c6-fa51-4aab-bf05-50f96cd91035" containerID="b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96" exitCode=0 Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.526635 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" event={"ID":"0669c8c6-fa51-4aab-bf05-50f96cd91035","Type":"ContainerDied","Data":"b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96"} Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.593026 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.593077 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.593087 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.593103 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.593114 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:32Z","lastTransitionTime":"2026-01-27T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.694677 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.694711 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.694720 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.694737 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.694747 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:32Z","lastTransitionTime":"2026-01-27T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.798177 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.798507 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.798520 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.798535 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.798545 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:32Z","lastTransitionTime":"2026-01-27T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.900587 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.900627 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.900635 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.900650 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:32 crc kubenswrapper[4914]: I0127 13:44:32.900658 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:32Z","lastTransitionTime":"2026-01-27T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.002899 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.002947 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.002957 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.002971 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.002981 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:33Z","lastTransitionTime":"2026-01-27T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.104971 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.105033 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.105042 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.105055 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.105067 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:33Z","lastTransitionTime":"2026-01-27T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.123744 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z 
is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.141996 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.156626 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.172053 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.188026 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.202886 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.207341 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.207384 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.207406 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.207431 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:33 crc 
kubenswrapper[4914]: I0127 13:44:33.207440 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:33Z","lastTransitionTime":"2026-01-27T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.222087 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.234038 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.245079 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.253664 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 22:02:01.046094402 +0000 UTC Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.256477 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.268181 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.278486 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.293381 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:33 crc kubenswrapper[4914]: E0127 13:44:33.293529 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.298135 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.309548 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.309603 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.309612 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.309628 4914 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.309639 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:33Z","lastTransitionTime":"2026-01-27T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.312870 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.326734 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.347446 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.364652 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:
03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.377987 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.390184 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.401156 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.413756 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 
13:44:33.413814 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.413857 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.413875 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.413885 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:33Z","lastTransitionTime":"2026-01-27T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.416479 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.428999 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.443985 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 
13:44:33.459541 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 
13:44:33.476579 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.489365 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.502277 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.513391 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.515866 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.515889 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.515899 4914 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.515915 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.515926 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:33Z","lastTransitionTime":"2026-01-27T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.524484 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"message\\\":\\\"containers with unready 
status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.533105 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerStarted","Data":"cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.535361 4914 generic.go:334] "Generic (PLEG): container finished" podID="0669c8c6-fa51-4aab-bf05-50f96cd91035" 
containerID="d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8" exitCode=0 Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.535427 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" event={"ID":"0669c8c6-fa51-4aab-bf05-50f96cd91035","Type":"ContainerDied","Data":"d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.536425 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5vprj" event={"ID":"afff8b35-f3f4-4f13-a19d-cb318f982fbc","Type":"ContainerStarted","Data":"5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.539393 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.558294 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.570403 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.584646 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.598486 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.610594 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.618459 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.618487 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.618495 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.618507 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.618515 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:33Z","lastTransitionTime":"2026-01-27T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.630623 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.643925 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.657426 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.671014 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.683379 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.695040 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.704749 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.721020 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.721071 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.721084 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.721100 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.721113 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:33Z","lastTransitionTime":"2026-01-27T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.724376 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.737780 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.756394 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.770994 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.787266 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.801715 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.815702 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.826349 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.828394 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 
13:44:33.828454 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.828465 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.828482 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.828495 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:33Z","lastTransitionTime":"2026-01-27T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.845471 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.856878 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.870094 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.882618 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.900891 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.913030 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.924927 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.930621 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.930672 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.930686 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.930705 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.930718 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:33Z","lastTransitionTime":"2026-01-27T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.935053 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.949307 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be
90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:33 crc kubenswrapper[4914]: I0127 13:44:33.962812 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.034200 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.034245 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.034255 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.034271 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.034281 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:34Z","lastTransitionTime":"2026-01-27T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.137107 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.137158 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.137173 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.137191 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.137205 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:34Z","lastTransitionTime":"2026-01-27T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.239555 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.239603 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.239614 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.239630 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.239641 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:34Z","lastTransitionTime":"2026-01-27T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.254030 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:05:55.790806038 +0000 UTC Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.293733 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.293762 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:34 crc kubenswrapper[4914]: E0127 13:44:34.293907 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:34 crc kubenswrapper[4914]: E0127 13:44:34.293983 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.341815 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.341876 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.341889 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.341906 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.341921 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:34Z","lastTransitionTime":"2026-01-27T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.444131 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.444167 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.444177 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.444206 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.444215 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:34Z","lastTransitionTime":"2026-01-27T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.542567 4914 generic.go:334] "Generic (PLEG): container finished" podID="0669c8c6-fa51-4aab-bf05-50f96cd91035" containerID="c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c" exitCode=0 Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.542652 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" event={"ID":"0669c8c6-fa51-4aab-bf05-50f96cd91035","Type":"ContainerDied","Data":"c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c"} Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.545591 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.545626 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.545637 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.545653 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.545668 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:34Z","lastTransitionTime":"2026-01-27T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.559978 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.577021 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.592442 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.610772 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.631725 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.646170 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:
03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.647892 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.647929 
4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.647941 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.647957 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.647968 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:34Z","lastTransitionTime":"2026-01-27T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.667260 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.686207 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.706823 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.721358 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.733243 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.747264 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.751217 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.751263 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.751275 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.751290 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.751303 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:34Z","lastTransitionTime":"2026-01-27T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.761677 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.776584 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.793027 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.854220 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.854256 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.854269 4914 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.854284 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.854296 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:34Z","lastTransitionTime":"2026-01-27T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.956931 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.956997 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.957014 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.957037 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:34 crc kubenswrapper[4914]: I0127 13:44:34.957052 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:34Z","lastTransitionTime":"2026-01-27T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.060752 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.060797 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.060806 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.060822 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.060852 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.163298 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.163330 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.163340 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.163355 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.163366 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.254737 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:35:05.998307993 +0000 UTC Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.267258 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.267327 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.267342 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.267372 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.267387 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.294087 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:35 crc kubenswrapper[4914]: E0127 13:44:35.295087 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.342184 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.342231 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.342243 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.342264 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.342289 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: E0127 13:44:35.355753 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.359457 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.359490 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.359501 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.359516 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.359528 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: E0127 13:44:35.370812 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.374537 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.374566 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.374575 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.374588 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.374596 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: E0127 13:44:35.389280 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.394517 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.394550 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.394560 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.394577 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.394588 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: E0127 13:44:35.412407 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.416608 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.416650 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.416662 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.416681 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.416695 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: E0127 13:44:35.440274 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: E0127 13:44:35.440442 4914 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.442102 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.442213 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.442229 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.442244 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.442255 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.545781 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.546022 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.546114 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.546244 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.546479 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.552796 4914 generic.go:334] "Generic (PLEG): container finished" podID="0669c8c6-fa51-4aab-bf05-50f96cd91035" containerID="2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8" exitCode=0 Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.552869 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" event={"ID":"0669c8c6-fa51-4aab-bf05-50f96cd91035","Type":"ContainerDied","Data":"2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8"} Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.563606 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerStarted","Data":"3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983"} Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.564102 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.564135 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.573752 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.596016 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.598240 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.599178 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.610330 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.623353 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.638402 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.651271 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.651309 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.651320 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.651336 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.651347 4914 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.651375 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.666422 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.681545 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.694925 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.709589 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.725900 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.742005 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.754122 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 
13:44:35.754160 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.754171 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.754188 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.754201 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.766277 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.782991 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:
03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.799624 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.813401 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be
90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.827358 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.857342 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.857412 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.857440 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.857462 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.857477 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.872314 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.895705 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.917393 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.942400 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.960805 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.960873 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.960886 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.960906 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.960919 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:35Z","lastTransitionTime":"2026-01-27T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.960782 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e
28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.979865 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:35 crc kubenswrapper[4914]: I0127 13:44:35.993979 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.009533 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.030064 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.047087 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.063596 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.063646 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.063660 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.063682 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.063696 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:36Z","lastTransitionTime":"2026-01-27T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.069211 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.085413 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.107369 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.166357 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.166397 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.166409 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.166426 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.166439 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:36Z","lastTransitionTime":"2026-01-27T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.257631 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:36:03.650594004 +0000 UTC Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.268276 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.268311 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.268333 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.268371 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.268382 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:36Z","lastTransitionTime":"2026-01-27T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.294047 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.294108 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:36 crc kubenswrapper[4914]: E0127 13:44:36.294184 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:36 crc kubenswrapper[4914]: E0127 13:44:36.294255 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.371409 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.371447 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.371458 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.371476 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.371488 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:36Z","lastTransitionTime":"2026-01-27T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.475189 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.475226 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.475235 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.475250 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.475261 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:36Z","lastTransitionTime":"2026-01-27T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.574464 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" event={"ID":"0669c8c6-fa51-4aab-bf05-50f96cd91035","Type":"ContainerStarted","Data":"3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190"} Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.574556 4914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.577301 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.577327 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.577337 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.577353 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.577362 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:36Z","lastTransitionTime":"2026-01-27T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.589866 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.609046 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.623577 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:
03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.638145 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.650713 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.661148 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.674747 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.679110 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.679160 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.679170 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.679188 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.679199 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:36Z","lastTransitionTime":"2026-01-27T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.686658 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.701760 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c
58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.714330 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.733965 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.749112 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.763637 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.774642 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.782796 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.783458 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.783568 4914 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.783696 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.783798 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:36Z","lastTransitionTime":"2026-01-27T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.786879 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d609
5db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.888809 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.888891 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.888902 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.888916 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.888926 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:36Z","lastTransitionTime":"2026-01-27T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.947323 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l"] Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.948523 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.950995 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.952585 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.970423 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.985916 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.992455 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.992496 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.992506 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.992528 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:36 crc kubenswrapper[4914]: I0127 13:44:36.992541 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:36Z","lastTransitionTime":"2026-01-27T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.002241 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:36Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.025549 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.040200 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:
03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.056552 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.069800 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.080912 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb7m9\" (UniqueName: \"kubernetes.io/projected/ab2c7833-d799-431e-a4dc-d6790e7c732b-kube-api-access-qb7m9\") pod \"ovnkube-control-plane-749d76644c-lhm6l\" (UID: \"ab2c7833-d799-431e-a4dc-d6790e7c732b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.080979 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab2c7833-d799-431e-a4dc-d6790e7c732b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lhm6l\" (UID: \"ab2c7833-d799-431e-a4dc-d6790e7c732b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.081002 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab2c7833-d799-431e-a4dc-d6790e7c732b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lhm6l\" (UID: \"ab2c7833-d799-431e-a4dc-d6790e7c732b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.081045 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab2c7833-d799-431e-a4dc-d6790e7c732b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lhm6l\" (UID: \"ab2c7833-d799-431e-a4dc-d6790e7c732b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.092474 4914 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.094894 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.094946 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.094959 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.094980 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.094992 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:37Z","lastTransitionTime":"2026-01-27T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.112005 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.127933 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.145954 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411e
b40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.159963 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.173267 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.181462 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab2c7833-d799-431e-a4dc-d6790e7c732b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lhm6l\" (UID: \"ab2c7833-d799-431e-a4dc-d6790e7c732b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.181503 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab2c7833-d799-431e-a4dc-d6790e7c732b-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-lhm6l\" (UID: \"ab2c7833-d799-431e-a4dc-d6790e7c732b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.181533 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab2c7833-d799-431e-a4dc-d6790e7c732b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lhm6l\" (UID: \"ab2c7833-d799-431e-a4dc-d6790e7c732b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.181563 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb7m9\" (UniqueName: \"kubernetes.io/projected/ab2c7833-d799-431e-a4dc-d6790e7c732b-kube-api-access-qb7m9\") pod \"ovnkube-control-plane-749d76644c-lhm6l\" (UID: \"ab2c7833-d799-431e-a4dc-d6790e7c732b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.182654 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab2c7833-d799-431e-a4dc-d6790e7c732b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lhm6l\" (UID: \"ab2c7833-d799-431e-a4dc-d6790e7c732b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.182695 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab2c7833-d799-431e-a4dc-d6790e7c732b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lhm6l\" (UID: \"ab2c7833-d799-431e-a4dc-d6790e7c732b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.185659 4914 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.187702 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab2c7833-d799-431e-a4dc-d6790e7c732b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lhm6l\" (UID: \"ab2c7833-d799-431e-a4dc-d6790e7c732b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.196926 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.197354 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.197369 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.197384 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.197396 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:37Z","lastTransitionTime":"2026-01-27T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.197020 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb7m9\" (UniqueName: \"kubernetes.io/projected/ab2c7833-d799-431e-a4dc-d6790e7c732b-kube-api-access-qb7m9\") pod \"ovnkube-control-plane-749d76644c-lhm6l\" (UID: \"ab2c7833-d799-431e-a4dc-d6790e7c732b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.198303 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.207112 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.258367 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:33:30.394671977 +0000 UTC Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.265970 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" Jan 27 13:44:37 crc kubenswrapper[4914]: W0127 13:44:37.285130 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2c7833_d799_431e_a4dc_d6790e7c732b.slice/crio-f596949619659f8817265b5d68796e04ef8e942da5ff6ecee543be43b8ee8360 WatchSource:0}: Error finding container f596949619659f8817265b5d68796e04ef8e942da5ff6ecee543be43b8ee8360: Status 404 returned error can't find the container with id f596949619659f8817265b5d68796e04ef8e942da5ff6ecee543be43b8ee8360 Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.293570 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:37 crc kubenswrapper[4914]: E0127 13:44:37.293693 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.299478 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.299516 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.299523 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.299539 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.299548 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:37Z","lastTransitionTime":"2026-01-27T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.401877 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.401922 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.401937 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.401955 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.401966 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:37Z","lastTransitionTime":"2026-01-27T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.503948 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.503989 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.504000 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.504015 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.504025 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:37Z","lastTransitionTime":"2026-01-27T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.584677 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" event={"ID":"ab2c7833-d799-431e-a4dc-d6790e7c732b","Type":"ContainerStarted","Data":"501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234"} Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.584715 4914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.584949 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" event={"ID":"ab2c7833-d799-431e-a4dc-d6790e7c732b","Type":"ContainerStarted","Data":"f596949619659f8817265b5d68796e04ef8e942da5ff6ecee543be43b8ee8360"} Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.607522 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.608120 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.608135 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.608151 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.608162 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:37Z","lastTransitionTime":"2026-01-27T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.709988 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.710031 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.710041 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.710056 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.710067 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:37Z","lastTransitionTime":"2026-01-27T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.812665 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.812699 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.812709 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.812722 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.812731 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:37Z","lastTransitionTime":"2026-01-27T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.915291 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.915336 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.915370 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.915384 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:37 crc kubenswrapper[4914]: I0127 13:44:37.915393 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:37Z","lastTransitionTime":"2026-01-27T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.017770 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.017819 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.017845 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.017864 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.017876 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:38Z","lastTransitionTime":"2026-01-27T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.120901 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.120967 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.120978 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.121016 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.121029 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:38Z","lastTransitionTime":"2026-01-27T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.223931 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.223966 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.223974 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.223988 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.223997 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:38Z","lastTransitionTime":"2026-01-27T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.259319 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:39:21.564653257 +0000 UTC Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.294332 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.294435 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:38 crc kubenswrapper[4914]: E0127 13:44:38.294570 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:38 crc kubenswrapper[4914]: E0127 13:44:38.294748 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.327425 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.327462 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.327489 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.327506 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.327516 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:38Z","lastTransitionTime":"2026-01-27T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.430367 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.430395 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.430403 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.430434 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.430451 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:38Z","lastTransitionTime":"2026-01-27T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.532441 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.532499 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.532517 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.532542 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.532560 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:38Z","lastTransitionTime":"2026-01-27T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.589071 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" event={"ID":"ab2c7833-d799-431e-a4dc-d6790e7c732b","Type":"ContainerStarted","Data":"89adccd2426d30582859887febfed5f2a3cd3bd0bee3851203db822661d59a9b"} Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.600605 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.612168 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.624776 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.634888 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.634916 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.634947 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.634963 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.634973 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:38Z","lastTransitionTime":"2026-01-27T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.637072 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa
09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.649889 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.661766 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.683608 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.693172 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.708718 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc
1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.725710 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.737377 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.737420 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.737431 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.737445 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.737456 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:38Z","lastTransitionTime":"2026-01-27T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.746006 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.761666 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.772119 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.781075 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.793611 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.805645 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:38Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.840691 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.841938 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.841961 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.841975 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.841985 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:38Z","lastTransitionTime":"2026-01-27T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.943820 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.943891 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.943903 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.943939 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:38 crc kubenswrapper[4914]: I0127 13:44:38.943951 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:38Z","lastTransitionTime":"2026-01-27T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.046222 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.046261 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.046276 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.046298 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.046308 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:39Z","lastTransitionTime":"2026-01-27T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.148488 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.148738 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.148810 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.148938 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.149010 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:39Z","lastTransitionTime":"2026-01-27T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.213178 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-22nld"] Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.213676 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:39 crc kubenswrapper[4914]: E0127 13:44:39.213740 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.240075 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.251522 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.251555 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.251566 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.251583 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.251595 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:39Z","lastTransitionTime":"2026-01-27T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.253485 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.259934 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:04:07.376053154 +0000 UTC Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.266093 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.279048 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.293457 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:39 crc kubenswrapper[4914]: E0127 13:44:39.293689 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.294733 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e25
3ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.305084 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74rh\" (UniqueName: \"kubernetes.io/projected/72d4d49f-291e-448e-81eb-0895324cd4ae-kube-api-access-d74rh\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.305342 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.308233 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.318913 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.337489 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411e
b40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.354550 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.354617 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.354633 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.354656 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.354670 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:39Z","lastTransitionTime":"2026-01-27T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.355939 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\
\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.378649 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.393598 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
7T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.406889 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.407001 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d74rh\" (UniqueName: \"kubernetes.io/projected/72d4d49f-291e-448e-81eb-0895324cd4ae-kube-api-access-d74rh\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:39 crc kubenswrapper[4914]: E0127 13:44:39.407184 4914 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:44:39 crc kubenswrapper[4914]: E0127 13:44:39.407330 4914 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs podName:72d4d49f-291e-448e-81eb-0895324cd4ae nodeName:}" failed. No retries permitted until 2026-01-27 13:44:39.907298188 +0000 UTC m=+38.219648313 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs") pod "network-metrics-daemon-22nld" (UID: "72d4d49f-291e-448e-81eb-0895324cd4ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.410764 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.421987 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.426771 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d74rh\" (UniqueName: \"kubernetes.io/projected/72d4d49f-291e-448e-81eb-0895324cd4ae-kube-api-access-d74rh\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.437212 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.451172 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.461589 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.461625 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.461634 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.461647 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.461657 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:39Z","lastTransitionTime":"2026-01-27T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.462699 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc 
kubenswrapper[4914]: I0127 13:44:39.478078 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.564559 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.564599 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.564613 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.564645 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.564664 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:39Z","lastTransitionTime":"2026-01-27T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.594785 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/0.log" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.598462 4914 generic.go:334] "Generic (PLEG): container finished" podID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerID="3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983" exitCode=1 Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.598555 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983"} Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.599509 4914 scope.go:117] "RemoveContainer" containerID="3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.624142 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.637212 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:
03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.651418 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.663260 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.670389 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.670422 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.670431 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.670444 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.670453 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:39Z","lastTransitionTime":"2026-01-27T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.675038 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.688956 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.698350 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.712570 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411e
b40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.727018 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.744891 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0127 13:44:38.379226 6159 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 13:44:38.379256 6159 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379350 6159 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379428 6159 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379632 6159 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.380079 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 13:44:38.380092 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 13:44:38.380123 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 13:44:38.380156 6159 factory.go:656] Stopping watch factory\\\\nI0127 13:44:38.380170 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0127 13:44:38.380182 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 13:44:38.380194 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 
13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.758576 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.770822 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.772379 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.772401 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.772410 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.772442 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.772452 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:39Z","lastTransitionTime":"2026-01-27T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.781659 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.790156 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.800982 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.809392 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.818168 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:39Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:39 crc 
kubenswrapper[4914]: I0127 13:44:39.876041 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.876083 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.876092 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.876105 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.876114 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:39Z","lastTransitionTime":"2026-01-27T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.916633 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:39 crc kubenswrapper[4914]: E0127 13:44:39.916858 4914 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:44:39 crc kubenswrapper[4914]: E0127 13:44:39.916944 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs podName:72d4d49f-291e-448e-81eb-0895324cd4ae nodeName:}" failed. No retries permitted until 2026-01-27 13:44:40.916925774 +0000 UTC m=+39.229275859 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs") pod "network-metrics-daemon-22nld" (UID: "72d4d49f-291e-448e-81eb-0895324cd4ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.978407 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.978459 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.978476 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.978496 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:39 crc kubenswrapper[4914]: I0127 13:44:39.978512 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:39Z","lastTransitionTime":"2026-01-27T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.085920 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.085964 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.085974 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.085991 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.086004 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:40Z","lastTransitionTime":"2026-01-27T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.117704 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.117856 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 13:44:56.117825772 +0000 UTC m=+54.430175857 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.117911 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.117951 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.118021 4914 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.118057 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 13:44:56.118050299 +0000 UTC m=+54.430400384 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.118093 4914 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.118139 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:56.118124861 +0000 UTC m=+54.430474956 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.187671 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.187705 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.187713 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.187729 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.187738 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:40Z","lastTransitionTime":"2026-01-27T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.219308 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.219412 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.219552 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.219563 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.219603 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.219616 4914 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 
13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.219666 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:56.219650788 +0000 UTC m=+54.532000873 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.219575 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.219714 4914 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.219775 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:44:56.219754031 +0000 UTC m=+54.532104206 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.301032 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:16:58.130710663 +0000 UTC Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.301657 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.301693 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.301756 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.301906 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.302592 4914 scope.go:117] "RemoveContainer" containerID="fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.303230 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.303261 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.303272 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.303289 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.303301 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:40Z","lastTransitionTime":"2026-01-27T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.406147 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.406184 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.406193 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.406207 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.406216 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:40Z","lastTransitionTime":"2026-01-27T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.508381 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.508425 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.508437 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.508452 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.508464 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:40Z","lastTransitionTime":"2026-01-27T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.603467 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.605449 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b"} Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.605867 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.608256 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/0.log" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.609889 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.609914 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.609925 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.609940 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.609958 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:40Z","lastTransitionTime":"2026-01-27T13:44:40Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.611231 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerStarted","Data":"874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c"} Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.611314 4914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.624879 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.641747 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.657945 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.668954 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.686406 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.698773 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.709442 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc 
kubenswrapper[4914]: I0127 13:44:40.711896 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.711923 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.711933 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.711949 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.711959 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:40Z","lastTransitionTime":"2026-01-27T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.728873 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.742570 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.756187 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.768416 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.782053 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.795790 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.807685 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.819629 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.819669 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.819681 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.819697 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.819709 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:40Z","lastTransitionTime":"2026-01-27T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.828819 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.841120 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.858793 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0127 13:44:38.379226 6159 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 13:44:38.379256 6159 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379350 6159 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379428 6159 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379632 6159 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.380079 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 13:44:38.380092 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 13:44:38.380123 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 13:44:38.380156 6159 factory.go:656] Stopping watch factory\\\\nI0127 13:44:38.380170 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0127 13:44:38.380182 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 13:44:38.380194 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 
13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.876284 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.888259 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.903590 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411e
b40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.915164 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.922101 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:40 crc 
kubenswrapper[4914]: I0127 13:44:40.922134 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.922172 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.922187 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.922196 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:40Z","lastTransitionTime":"2026-01-27T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.929726 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.929849 4914 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:44:40 crc kubenswrapper[4914]: E0127 13:44:40.929898 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs podName:72d4d49f-291e-448e-81eb-0895324cd4ae nodeName:}" failed. No retries permitted until 2026-01-27 13:44:42.929885363 +0000 UTC m=+41.242235448 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs") pod "network-metrics-daemon-22nld" (UID: "72d4d49f-291e-448e-81eb-0895324cd4ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.932583 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0127 13:44:38.379226 6159 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 13:44:38.379256 6159 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379350 6159 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379428 6159 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379632 6159 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.380079 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 13:44:38.380092 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 13:44:38.380123 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 13:44:38.380156 6159 factory.go:656] Stopping watch factory\\\\nI0127 13:44:38.380170 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0127 13:44:38.380182 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 13:44:38.380194 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 
13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.957230 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.979175 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:40 crc kubenswrapper[4914]: I0127 13:44:40.996123 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.006918 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.019158 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.024000 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.024175 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.024248 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.024315 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.024376 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:41Z","lastTransitionTime":"2026-01-27T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.031894 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.042877 4914 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc 
kubenswrapper[4914]: I0127 13:44:41.061590 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.076480 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.089262 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.100740 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.121160 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.126791 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 
13:44:41.127028 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.127117 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.127188 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.127254 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:41Z","lastTransitionTime":"2026-01-27T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.229857 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.230166 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.230240 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.230388 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.230463 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:41Z","lastTransitionTime":"2026-01-27T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.294220 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.294246 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:41 crc kubenswrapper[4914]: E0127 13:44:41.294355 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:41 crc kubenswrapper[4914]: E0127 13:44:41.294451 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.301947 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 10:35:39.80325165 +0000 UTC Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.332811 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.332862 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.332870 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.332887 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.332898 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:41Z","lastTransitionTime":"2026-01-27T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.435369 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.435422 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.435436 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.435454 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.435469 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:41Z","lastTransitionTime":"2026-01-27T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.537247 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.537303 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.537321 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.537346 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.537362 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:41Z","lastTransitionTime":"2026-01-27T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.616770 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/1.log" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.617414 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/0.log" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.620408 4914 generic.go:334] "Generic (PLEG): container finished" podID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerID="874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c" exitCode=1 Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.620476 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c"} Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.620577 4914 scope.go:117] "RemoveContainer" containerID="3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.621961 4914 scope.go:117] "RemoveContainer" containerID="874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c" Jan 27 13:44:41 crc kubenswrapper[4914]: E0127 13:44:41.622339 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.637843 4914 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.639037 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.639066 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.639074 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.639087 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.639096 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:41Z","lastTransitionTime":"2026-01-27T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.652244 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.663584 4914 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc 
kubenswrapper[4914]: I0127 13:44:41.684281 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.697876 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.713225 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.725879 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.737857 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.741722 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 
13:44:41.741788 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.741802 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.741842 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.741860 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:41Z","lastTransitionTime":"2026-01-27T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.754889 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.767476 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.784422 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411e
b40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.804304 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.822522 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0127 13:44:38.379226 6159 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 13:44:38.379256 6159 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379350 6159 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379428 6159 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379632 6159 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.380079 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 13:44:38.380092 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 13:44:38.380123 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 13:44:38.380156 6159 factory.go:656] Stopping watch factory\\\\nI0127 13:44:38.380170 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0127 13:44:38.380182 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 13:44:38.380194 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:40Z\\\",\\\"message\\\":\\\"ervices.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 13:44:40.628225 6391 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 13:44:40.628420 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:44:40.628368 63\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\
"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.837401 4914 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.844462 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.844517 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.844527 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.844541 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.844551 4914 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:41Z","lastTransitionTime":"2026-01-27T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.851723 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name
\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.862268 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.871401 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.946382 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.946434 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.946443 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.946455 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:41 crc kubenswrapper[4914]: I0127 13:44:41.946465 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:41Z","lastTransitionTime":"2026-01-27T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.048526 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.048582 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.048594 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.048615 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.048626 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:42Z","lastTransitionTime":"2026-01-27T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.150474 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.150514 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.150523 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.150538 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.150548 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:42Z","lastTransitionTime":"2026-01-27T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.252721 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.252757 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.252766 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.252779 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.252788 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:42Z","lastTransitionTime":"2026-01-27T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.293494 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.293529 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:42 crc kubenswrapper[4914]: E0127 13:44:42.293778 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:42 crc kubenswrapper[4914]: E0127 13:44:42.293643 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.302980 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:01:09.230015869 +0000 UTC Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.306211 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.321766 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.337819 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.352880 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.355931 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.355973 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.355983 4914 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.355999 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.356010 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:42Z","lastTransitionTime":"2026-01-27T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.376252 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.388065 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.397491 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc 
kubenswrapper[4914]: I0127 13:44:42.410314 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.422856 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.434235 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.450360 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.458176 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.458209 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.458219 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.458234 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.458244 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:42Z","lastTransitionTime":"2026-01-27T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.462635 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e
28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.476033 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc
1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.488422 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.505855 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed73d93ebf2f084c9809bb02e2f2d42f9d5174d5b6d6d46806f2edd0dc97983\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0127 13:44:38.379226 6159 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 13:44:38.379256 6159 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379350 6159 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379428 6159 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.379632 6159 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 13:44:38.380079 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 13:44:38.380092 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 13:44:38.380123 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 13:44:38.380156 6159 factory.go:656] Stopping watch factory\\\\nI0127 13:44:38.380170 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0127 13:44:38.380182 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 13:44:38.380194 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:40Z\\\",\\\"message\\\":\\\"ervices.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 13:44:40.628225 6391 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 13:44:40.628420 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:44:40.628368 63\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\
"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.518785 4914 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.529650 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:42Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.561335 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.561383 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.561394 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.561413 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.561423 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:42Z","lastTransitionTime":"2026-01-27T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.626185 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/1.log" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.663173 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.663475 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.663586 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.663665 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.663734 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:42Z","lastTransitionTime":"2026-01-27T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.766267 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.766519 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.766736 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.766843 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.766933 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:42Z","lastTransitionTime":"2026-01-27T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.870040 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.870102 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.870116 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.870134 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.870147 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:42Z","lastTransitionTime":"2026-01-27T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.951019 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:42 crc kubenswrapper[4914]: E0127 13:44:42.951208 4914 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:44:42 crc kubenswrapper[4914]: E0127 13:44:42.951476 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs podName:72d4d49f-291e-448e-81eb-0895324cd4ae nodeName:}" failed. No retries permitted until 2026-01-27 13:44:46.951451318 +0000 UTC m=+45.263801403 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs") pod "network-metrics-daemon-22nld" (UID: "72d4d49f-291e-448e-81eb-0895324cd4ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.972242 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.972277 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.972285 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.972297 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:42 crc kubenswrapper[4914]: I0127 13:44:42.972306 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:42Z","lastTransitionTime":"2026-01-27T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.463621 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:37:09.688893958 +0000 UTC Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.468157 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:44 crc kubenswrapper[4914]: E0127 13:44:44.470522 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.468737 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:44 crc kubenswrapper[4914]: E0127 13:44:44.471131 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.469351 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.468196 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.469205 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.471453 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.471466 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.471480 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.471490 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:44Z","lastTransitionTime":"2026-01-27T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:44 crc kubenswrapper[4914]: E0127 13:44:44.471674 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:44 crc kubenswrapper[4914]: E0127 13:44:44.471792 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.573747 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.574251 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.574329 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.574398 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.574459 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:44Z","lastTransitionTime":"2026-01-27T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.677624 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.677964 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.678063 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.678149 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.678238 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:44Z","lastTransitionTime":"2026-01-27T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.780616 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.780868 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.780949 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.781014 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.781073 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:44Z","lastTransitionTime":"2026-01-27T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.883481 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.883757 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.883877 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.883980 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.884079 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:44Z","lastTransitionTime":"2026-01-27T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.986701 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.986744 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.986755 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.986769 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:44 crc kubenswrapper[4914]: I0127 13:44:44.986778 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:44Z","lastTransitionTime":"2026-01-27T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.088708 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.088749 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.088757 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.088771 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.088781 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.191397 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.191486 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.191513 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.191534 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.191550 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.294596 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.294653 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.294663 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.294711 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.294728 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.397513 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.397546 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.397554 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.397566 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.397575 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.471171 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:14:03.290097803 +0000 UTC Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.500177 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.500230 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.500240 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.500252 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.500262 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.602741 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.602793 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.602804 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.602824 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.602860 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.616883 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.616924 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.616941 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.616961 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.616974 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: E0127 13:44:45.636728 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:45Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.640673 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.640756 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.640768 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.640787 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.640799 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: E0127 13:44:45.660398 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:45Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.664230 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.664316 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.664340 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.664373 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.664397 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: E0127 13:44:45.681716 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:45Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.686278 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.686320 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.686330 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.686349 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.686364 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: E0127 13:44:45.702289 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:45Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.706302 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.706352 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.706368 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.706391 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.706406 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: E0127 13:44:45.721418 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:45Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:45 crc kubenswrapper[4914]: E0127 13:44:45.721569 4914 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.726019 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.726051 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.726062 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.726080 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.726093 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.830593 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.830652 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.830667 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.830689 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.830705 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.933056 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.933097 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.933107 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.933119 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:45 crc kubenswrapper[4914]: I0127 13:44:45.933129 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:45Z","lastTransitionTime":"2026-01-27T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.035532 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.035824 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.035935 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.036029 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.036114 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:46Z","lastTransitionTime":"2026-01-27T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.139495 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.139538 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.139547 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.139562 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.139571 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:46Z","lastTransitionTime":"2026-01-27T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.242072 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.242114 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.242131 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.242148 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.242158 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:46Z","lastTransitionTime":"2026-01-27T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.293903 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.293946 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.293984 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.294014 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:46 crc kubenswrapper[4914]: E0127 13:44:46.294155 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:44:46 crc kubenswrapper[4914]: E0127 13:44:46.294203 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:46 crc kubenswrapper[4914]: E0127 13:44:46.294251 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:46 crc kubenswrapper[4914]: E0127 13:44:46.294326 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.344139 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.344183 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.344196 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.344217 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.344231 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:46Z","lastTransitionTime":"2026-01-27T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.446253 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.446302 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.446313 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.446331 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.446346 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:46Z","lastTransitionTime":"2026-01-27T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.471890 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:52:57.91639318 +0000 UTC Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.552423 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.552463 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.552476 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.552500 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.552515 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:46Z","lastTransitionTime":"2026-01-27T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.655107 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.655149 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.655159 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.655174 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.655183 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:46Z","lastTransitionTime":"2026-01-27T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.757537 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.757584 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.757596 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.757615 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.757627 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:46Z","lastTransitionTime":"2026-01-27T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.859723 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.859764 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.859772 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.859788 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.859798 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:46Z","lastTransitionTime":"2026-01-27T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.962107 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.962159 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.962170 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.962185 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:46 crc kubenswrapper[4914]: I0127 13:44:46.962196 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:46Z","lastTransitionTime":"2026-01-27T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.002087 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:47 crc kubenswrapper[4914]: E0127 13:44:47.002289 4914 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:44:47 crc kubenswrapper[4914]: E0127 13:44:47.002397 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs podName:72d4d49f-291e-448e-81eb-0895324cd4ae nodeName:}" failed. No retries permitted until 2026-01-27 13:44:55.002374379 +0000 UTC m=+53.314724484 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs") pod "network-metrics-daemon-22nld" (UID: "72d4d49f-291e-448e-81eb-0895324cd4ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.065012 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.065057 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.065069 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.065084 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.065095 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:47Z","lastTransitionTime":"2026-01-27T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.166870 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.166921 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.166933 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.166949 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.166960 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:47Z","lastTransitionTime":"2026-01-27T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.269226 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.269269 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.269297 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.269313 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.269323 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:47Z","lastTransitionTime":"2026-01-27T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.371142 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.371252 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.371265 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.371283 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.371295 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:47Z","lastTransitionTime":"2026-01-27T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.472334 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:13:02.783038329 +0000 UTC Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.474565 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.474622 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.474634 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.474652 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.474664 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:47Z","lastTransitionTime":"2026-01-27T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.577309 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.577365 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.577382 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.577397 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.577409 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:47Z","lastTransitionTime":"2026-01-27T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.678964 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.679001 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.679010 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.679026 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.679036 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:47Z","lastTransitionTime":"2026-01-27T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.781253 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.781294 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.781305 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.781321 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.781332 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:47Z","lastTransitionTime":"2026-01-27T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.884074 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.884137 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.884145 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.884178 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.884190 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:47Z","lastTransitionTime":"2026-01-27T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.987150 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.987190 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.987205 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.987221 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:47 crc kubenswrapper[4914]: I0127 13:44:47.987231 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:47Z","lastTransitionTime":"2026-01-27T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.089565 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.089606 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.089616 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.089629 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.089637 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:48Z","lastTransitionTime":"2026-01-27T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.192461 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.192511 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.192527 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.192546 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.192559 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:48Z","lastTransitionTime":"2026-01-27T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.293306 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.293347 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.293361 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.293320 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:48 crc kubenswrapper[4914]: E0127 13:44:48.293483 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:48 crc kubenswrapper[4914]: E0127 13:44:48.293561 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:48 crc kubenswrapper[4914]: E0127 13:44:48.293652 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:44:48 crc kubenswrapper[4914]: E0127 13:44:48.293707 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.294393 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.294422 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.294430 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.294442 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.294451 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:48Z","lastTransitionTime":"2026-01-27T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.396615 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.396660 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.396669 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.396684 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.396694 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:48Z","lastTransitionTime":"2026-01-27T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.473372 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:48:59.275787625 +0000 UTC Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.499307 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.499369 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.499381 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.499398 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.499410 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:48Z","lastTransitionTime":"2026-01-27T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.603005 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.603069 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.603102 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.603121 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.603134 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:48Z","lastTransitionTime":"2026-01-27T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.705982 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.706026 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.706037 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.706054 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.706065 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:48Z","lastTransitionTime":"2026-01-27T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.807770 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.807819 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.807847 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.807884 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.807899 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:48Z","lastTransitionTime":"2026-01-27T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.909761 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.909801 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.909850 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.909873 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:48 crc kubenswrapper[4914]: I0127 13:44:48.909897 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:48Z","lastTransitionTime":"2026-01-27T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.012175 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.012226 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.012245 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.012263 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.012274 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:49Z","lastTransitionTime":"2026-01-27T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.115496 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.115564 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.115574 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.115592 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.115607 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:49Z","lastTransitionTime":"2026-01-27T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.218521 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.218574 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.218586 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.218604 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.218619 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:49Z","lastTransitionTime":"2026-01-27T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.323592 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.323639 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.323651 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.323667 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.323678 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:49Z","lastTransitionTime":"2026-01-27T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.432776 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.432861 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.432875 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.432892 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.432904 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:49Z","lastTransitionTime":"2026-01-27T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.474464 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:28:27.857243185 +0000 UTC Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.534994 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.535353 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.535471 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.535592 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.535694 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:49Z","lastTransitionTime":"2026-01-27T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.638712 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.638780 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.638793 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.638816 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.638852 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:49Z","lastTransitionTime":"2026-01-27T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.740627 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.740674 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.740685 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.740701 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.740712 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:49Z","lastTransitionTime":"2026-01-27T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.843486 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.843537 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.843551 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.843568 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.843578 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:49Z","lastTransitionTime":"2026-01-27T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.921023 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.921855 4914 scope.go:117] "RemoveContainer" containerID="874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c" Jan 27 13:44:49 crc kubenswrapper[4914]: E0127 13:44:49.922045 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.937353 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc
1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:49Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.945868 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.945914 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.945926 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.945944 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.945958 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:49Z","lastTransitionTime":"2026-01-27T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.950827 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:49Z 
is after 2025-08-24T17:21:41Z" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.975502 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:40Z\\\",\\\"message\\\":\\\"ervices.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 13:44:40.628225 6391 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 13:44:40.628420 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:44:40.628368 63\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748
e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:49Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.986212 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:49Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:49 crc kubenswrapper[4914]: I0127 13:44:49.996120 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:49Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.005319 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.021805 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.037078 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.048854 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.049165 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.049194 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.049204 4914 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.049222 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.049259 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:50Z","lastTransitionTime":"2026-01-27T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.061968 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.074377 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.085889 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:50 crc 
kubenswrapper[4914]: I0127 13:44:50.098705 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.109332 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.119793 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.140302 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.151138 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.151184 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.151194 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.151207 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.151218 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:50Z","lastTransitionTime":"2026-01-27T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.153985 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e
28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.253975 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.254017 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.254028 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.254042 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.254051 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:50Z","lastTransitionTime":"2026-01-27T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.293589 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.293709 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.293890 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:50 crc kubenswrapper[4914]: E0127 13:44:50.293887 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.293944 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:50 crc kubenswrapper[4914]: E0127 13:44:50.294017 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:50 crc kubenswrapper[4914]: E0127 13:44:50.294123 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:44:50 crc kubenswrapper[4914]: E0127 13:44:50.294206 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.356641 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.356693 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.356707 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.356728 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.356740 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:50Z","lastTransitionTime":"2026-01-27T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.459525 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.459578 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.459595 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.459615 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.459634 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:50Z","lastTransitionTime":"2026-01-27T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.474991 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:27:06.232527869 +0000 UTC Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.562327 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.562375 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.562384 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.562399 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.562408 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:50Z","lastTransitionTime":"2026-01-27T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.664570 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.664619 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.664631 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.664646 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.664662 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:50Z","lastTransitionTime":"2026-01-27T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.766663 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.766707 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.766716 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.766731 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.766743 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:50Z","lastTransitionTime":"2026-01-27T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.869587 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.869648 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.869657 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.869671 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.869680 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:50Z","lastTransitionTime":"2026-01-27T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.971756 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.971800 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.971810 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.971842 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:50 crc kubenswrapper[4914]: I0127 13:44:50.971853 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:50Z","lastTransitionTime":"2026-01-27T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.074191 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.074244 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.074252 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.074264 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.074274 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:51Z","lastTransitionTime":"2026-01-27T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.176784 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.176854 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.176865 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.176880 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.176889 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:51Z","lastTransitionTime":"2026-01-27T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.278967 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.279002 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.279013 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.279028 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.279039 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:51Z","lastTransitionTime":"2026-01-27T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.384784 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.384851 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.384864 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.384880 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.384891 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:51Z","lastTransitionTime":"2026-01-27T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.475530 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:38:22.340722863 +0000 UTC Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.496263 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.496300 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.496310 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.496326 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.496337 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:51Z","lastTransitionTime":"2026-01-27T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.599090 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.599150 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.599162 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.599178 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.599189 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:51Z","lastTransitionTime":"2026-01-27T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.702020 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.702054 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.702063 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.702080 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.702090 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:51Z","lastTransitionTime":"2026-01-27T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.803964 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.804033 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.804046 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.804060 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.804068 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:51Z","lastTransitionTime":"2026-01-27T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.906639 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.906679 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.906686 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.906704 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:51 crc kubenswrapper[4914]: I0127 13:44:51.906716 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:51Z","lastTransitionTime":"2026-01-27T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.008807 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.008865 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.008877 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.008891 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.008901 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:52Z","lastTransitionTime":"2026-01-27T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.111505 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.111541 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.111549 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.111564 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.111572 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:52Z","lastTransitionTime":"2026-01-27T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.214479 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.214529 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.214538 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.214561 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.214576 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:52Z","lastTransitionTime":"2026-01-27T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.293918 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.294002 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:52 crc kubenswrapper[4914]: E0127 13:44:52.294070 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.294146 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:52 crc kubenswrapper[4914]: E0127 13:44:52.294213 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.294261 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:52 crc kubenswrapper[4914]: E0127 13:44:52.294317 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:52 crc kubenswrapper[4914]: E0127 13:44:52.294397 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.309552 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.316588 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.316628 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.316639 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.316655 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.316668 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:52Z","lastTransitionTime":"2026-01-27T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.322490 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.333705 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.343709 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.356153 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.368630 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.382047 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc 
kubenswrapper[4914]: I0127 13:44:52.394772 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.413575 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.418460 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.418493 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.418502 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.418515 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.418524 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:52Z","lastTransitionTime":"2026-01-27T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.432733 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e
28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.451748 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.465625 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.475703 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:54:45.603026641 +0000 UTC Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.482798 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:40Z\\\",\\\"message\\\":\\\"ervices.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 13:44:40.628225 6391 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 13:44:40.628420 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:44:40.628368 63\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748
e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.496347 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.507998 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.520571 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.520626 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.520642 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.520657 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.520666 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:52Z","lastTransitionTime":"2026-01-27T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.524224 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.536590 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.623023 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.623055 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.623063 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.623077 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.623086 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:52Z","lastTransitionTime":"2026-01-27T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.725747 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.725800 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.725813 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.725861 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.725874 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:52Z","lastTransitionTime":"2026-01-27T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.828083 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.828117 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.828125 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.828141 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.828156 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:52Z","lastTransitionTime":"2026-01-27T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.933970 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.934009 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.934018 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.934031 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:52 crc kubenswrapper[4914]: I0127 13:44:52.934043 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:52Z","lastTransitionTime":"2026-01-27T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.036332 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.036371 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.036382 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.036399 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.036409 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:53Z","lastTransitionTime":"2026-01-27T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.139149 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.139244 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.139257 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.139296 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.139308 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:53Z","lastTransitionTime":"2026-01-27T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.241919 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.241959 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.241976 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.241992 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.242003 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:53Z","lastTransitionTime":"2026-01-27T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.344870 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.344921 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.344929 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.344943 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.344953 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:53Z","lastTransitionTime":"2026-01-27T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.447374 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.447433 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.447451 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.447468 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.447481 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:53Z","lastTransitionTime":"2026-01-27T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.476091 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:59:36.771312208 +0000 UTC Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.550396 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.550445 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.550457 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.550473 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.550484 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:53Z","lastTransitionTime":"2026-01-27T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.652702 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.652746 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.652756 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.652770 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.652779 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:53Z","lastTransitionTime":"2026-01-27T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.755150 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.755196 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.755209 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.755225 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.755236 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:53Z","lastTransitionTime":"2026-01-27T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.857186 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.857222 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.857237 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.857251 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.857262 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:53Z","lastTransitionTime":"2026-01-27T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.959219 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.959268 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.959285 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.959301 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:53 crc kubenswrapper[4914]: I0127 13:44:53.959310 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:53Z","lastTransitionTime":"2026-01-27T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.062139 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.062183 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.062191 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.062204 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.062213 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:54Z","lastTransitionTime":"2026-01-27T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.164450 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.164491 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.164502 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.164518 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.164530 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:54Z","lastTransitionTime":"2026-01-27T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.267319 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.267391 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.267412 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.267433 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.267450 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:54Z","lastTransitionTime":"2026-01-27T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.293893 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.294130 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.294221 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:54 crc kubenswrapper[4914]: E0127 13:44:54.294376 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.294430 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:54 crc kubenswrapper[4914]: E0127 13:44:54.294927 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:54 crc kubenswrapper[4914]: E0127 13:44:54.295197 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:54 crc kubenswrapper[4914]: E0127 13:44:54.295287 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.369698 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.369736 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.369747 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.369763 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.369774 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:54Z","lastTransitionTime":"2026-01-27T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.472435 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.472481 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.472490 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.472504 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.472518 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:54Z","lastTransitionTime":"2026-01-27T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.476557 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:34:33.810792944 +0000 UTC Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.574454 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.574491 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.574498 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.574514 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.574525 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:54Z","lastTransitionTime":"2026-01-27T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.677127 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.677189 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.677210 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.677232 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.677248 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:54Z","lastTransitionTime":"2026-01-27T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.779465 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.779509 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.779535 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.779552 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.779562 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:54Z","lastTransitionTime":"2026-01-27T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.882393 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.882446 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.882457 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.882475 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.882491 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:54Z","lastTransitionTime":"2026-01-27T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.985098 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.985135 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.985152 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.985171 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:54 crc kubenswrapper[4914]: I0127 13:44:54.985184 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:54Z","lastTransitionTime":"2026-01-27T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.080121 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:55 crc kubenswrapper[4914]: E0127 13:44:55.080282 4914 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:44:55 crc kubenswrapper[4914]: E0127 13:44:55.080339 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs podName:72d4d49f-291e-448e-81eb-0895324cd4ae nodeName:}" failed. No retries permitted until 2026-01-27 13:45:11.080320232 +0000 UTC m=+69.392670317 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs") pod "network-metrics-daemon-22nld" (UID: "72d4d49f-291e-448e-81eb-0895324cd4ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.087779 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.087867 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.087878 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.087892 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.087904 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:55Z","lastTransitionTime":"2026-01-27T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.190886 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.191160 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.191234 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.191322 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.191407 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:55Z","lastTransitionTime":"2026-01-27T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.294780 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.294826 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.294861 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.294879 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.294892 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:55Z","lastTransitionTime":"2026-01-27T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.397970 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.398272 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.398395 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.398489 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.398565 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:55Z","lastTransitionTime":"2026-01-27T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.477586 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:34:56.778876786 +0000 UTC Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.500431 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.500467 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.500478 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.500492 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.500503 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:55Z","lastTransitionTime":"2026-01-27T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.603186 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.604027 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.604062 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.604083 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.604097 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:55Z","lastTransitionTime":"2026-01-27T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.706187 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.706231 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.706241 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.706289 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.706304 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:55Z","lastTransitionTime":"2026-01-27T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.809092 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.809141 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.809154 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.809173 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.809188 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:55Z","lastTransitionTime":"2026-01-27T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.912352 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.912386 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.912396 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.912411 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:55 crc kubenswrapper[4914]: I0127 13:44:55.912422 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:55Z","lastTransitionTime":"2026-01-27T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.005045 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.005089 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.005097 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.005113 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.005124 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.018335 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.023153 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.023223 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.023240 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.023255 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.023265 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.038892 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.044608 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.044675 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.044687 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.044703 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.044716 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.056709 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.060420 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.060448 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.060455 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.060469 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.060477 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.071171 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.073960 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.073998 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.074010 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.074025 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.074036 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.087495 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.087635 4914 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.089088 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.089129 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.089142 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.089157 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.089168 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.191475 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.191539 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.191551 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.191569 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.191583 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.192254 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.192387 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.192427 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.192512 4914 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.192516 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:45:28.192462086 +0000 UTC m=+86.504812191 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.192571 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:45:28.192546989 +0000 UTC m=+86.504897084 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.192580 4914 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.192673 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:45:28.192654572 +0000 UTC m=+86.505004677 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.293118 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.293311 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.293330 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.293342 4914 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.293383 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.293441 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.293450 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.293457 4914 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.293496 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:45:28.293483507 +0000 UTC m=+86.605833592 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.293866 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-27 13:45:28.293825918 +0000 UTC m=+86.606176003 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.294469 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.294551 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.294600 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.294641 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.294679 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.294718 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.294780 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:56 crc kubenswrapper[4914]: E0127 13:44:56.294931 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.295100 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.295122 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.295132 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.295145 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.295156 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.396927 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.396967 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.396978 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.396992 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.397004 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.478066 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:28:32.108667865 +0000 UTC Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.499616 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.499664 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.499674 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.499689 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.499699 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.602242 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.602278 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.602287 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.602302 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.602315 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.705251 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.705280 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.705288 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.705300 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.705309 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.808070 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.808129 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.808150 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.808172 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.808183 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.910130 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.910184 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.910199 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.910219 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:56 crc kubenswrapper[4914]: I0127 13:44:56.910229 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:56Z","lastTransitionTime":"2026-01-27T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.013300 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.013354 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.013371 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.013393 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.013410 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:57Z","lastTransitionTime":"2026-01-27T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.115591 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.115622 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.115632 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.115657 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.115669 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:57Z","lastTransitionTime":"2026-01-27T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.217437 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.217477 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.217489 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.217505 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.217517 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:57Z","lastTransitionTime":"2026-01-27T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.319823 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.319906 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.319922 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.319942 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.319952 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:57Z","lastTransitionTime":"2026-01-27T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.421764 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.421788 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.421796 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.421807 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.421816 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:57Z","lastTransitionTime":"2026-01-27T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.478561 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 00:33:55.442873703 +0000 UTC Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.524585 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.524643 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.524659 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.524708 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.524722 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:57Z","lastTransitionTime":"2026-01-27T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.628381 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.628438 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.628453 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.628475 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.628525 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:57Z","lastTransitionTime":"2026-01-27T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.731456 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.731514 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.731523 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.731536 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.731547 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:57Z","lastTransitionTime":"2026-01-27T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.834409 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.834480 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.834505 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.834525 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.834539 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:57Z","lastTransitionTime":"2026-01-27T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.937847 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.937896 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.937909 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.937925 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:57 crc kubenswrapper[4914]: I0127 13:44:57.937937 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:57Z","lastTransitionTime":"2026-01-27T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.040997 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.041035 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.041044 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.041061 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.041076 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:58Z","lastTransitionTime":"2026-01-27T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.139199 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.144233 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.144276 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.144288 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.144306 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.144318 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:58Z","lastTransitionTime":"2026-01-27T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.148565 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.160611 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c879
9c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.173027 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.188174 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.199300 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.212155 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.223250 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.232200 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.246937 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.246964 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.246972 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.246985 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.246994 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:58Z","lastTransitionTime":"2026-01-27T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.253123 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.266095 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.285758 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:40Z\\\",\\\"message\\\":\\\"ervices.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 13:44:40.628225 6391 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 13:44:40.628420 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:44:40.628368 63\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748
e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.294171 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.294208 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.294172 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.294171 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:44:58 crc kubenswrapper[4914]: E0127 13:44:58.294329 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:44:58 crc kubenswrapper[4914]: E0127 13:44:58.294448 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:44:58 crc kubenswrapper[4914]: E0127 13:44:58.294519 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:44:58 crc kubenswrapper[4914]: E0127 13:44:58.294628 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.302056 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.315923 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.332062 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.344993 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.349473 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.349594 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.350009 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.350065 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.350081 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:58Z","lastTransitionTime":"2026-01-27T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.361387 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.385733 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.399113 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:58 crc 
kubenswrapper[4914]: I0127 13:44:58.453776 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.453824 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.453873 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.453896 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.453909 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:58Z","lastTransitionTime":"2026-01-27T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.478733 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:49:04.065297446 +0000 UTC Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.556803 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.556896 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.556924 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.556954 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.556974 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:58Z","lastTransitionTime":"2026-01-27T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.659600 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.659649 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.659663 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.659682 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.659697 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:58Z","lastTransitionTime":"2026-01-27T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.762257 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.762291 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.762300 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.762315 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.762324 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:58Z","lastTransitionTime":"2026-01-27T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.865149 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.865211 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.865229 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.865253 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.865274 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:58Z","lastTransitionTime":"2026-01-27T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.968475 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.968809 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.969013 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.969391 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:58 crc kubenswrapper[4914]: I0127 13:44:58.969549 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:58Z","lastTransitionTime":"2026-01-27T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.072580 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.072623 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.072631 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.072648 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.072656 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:59Z","lastTransitionTime":"2026-01-27T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.175245 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.175300 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.175316 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.175337 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.175352 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:59Z","lastTransitionTime":"2026-01-27T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.277856 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.277924 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.277937 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.277968 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.277982 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:59Z","lastTransitionTime":"2026-01-27T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.381324 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.381393 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.381405 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.381423 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.381436 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:59Z","lastTransitionTime":"2026-01-27T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.478952 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:10:35.424125477 +0000 UTC Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.483966 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.484041 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.484058 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.484083 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.484099 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:59Z","lastTransitionTime":"2026-01-27T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.516441 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.531880 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.542259 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.552715 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc 
kubenswrapper[4914]: I0127 13:44:59.564316 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae2dacff-90b1-4fc6-8d50-2114afdd4916\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b755de4898a164ac9577098c88092f6c7110e89fb07df414a9a1b7010598fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a165c9c9662f86e2a4031b517de6ca06d8fd0d5b946e8e777c4aff95601706\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a33f463899dafa02c42203728101686cc7171af93777439ce8bbf7feb13fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.581818 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.586387 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.586445 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.586457 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.586482 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.586495 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:59Z","lastTransitionTime":"2026-01-27T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.594080 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e
28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.612564 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.624761 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.635667 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.648673 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.663246 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.679713 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411e
b40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.688795 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.688857 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.688870 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.688886 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.688897 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:59Z","lastTransitionTime":"2026-01-27T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.693985 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\
\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.710061 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:40Z\\\",\\\"message\\\":\\\"ervices.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 13:44:40.628225 6391 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 13:44:40.628420 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:44:40.628368 63\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748
e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.722157 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06
d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.738084 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.748342 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.757681 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:59Z is after 2025-08-24T17:21:41Z" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.791609 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.791642 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.791650 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.791666 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.791675 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:59Z","lastTransitionTime":"2026-01-27T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.894788 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.894821 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.894850 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.894868 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.894879 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:59Z","lastTransitionTime":"2026-01-27T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.996738 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.996804 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.996818 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.996860 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:44:59 crc kubenswrapper[4914]: I0127 13:44:59.996875 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:44:59Z","lastTransitionTime":"2026-01-27T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.099106 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.099150 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.099159 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.099172 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.099181 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:00Z","lastTransitionTime":"2026-01-27T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.202438 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.202683 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.202804 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.202956 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.203058 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:00Z","lastTransitionTime":"2026-01-27T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.294000 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.294012 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:00 crc kubenswrapper[4914]: E0127 13:45:00.294143 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.294016 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:00 crc kubenswrapper[4914]: E0127 13:45:00.294311 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:00 crc kubenswrapper[4914]: E0127 13:45:00.294406 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.294708 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:00 crc kubenswrapper[4914]: E0127 13:45:00.294954 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.305741 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.305982 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.306106 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.306207 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.306295 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:00Z","lastTransitionTime":"2026-01-27T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.408733 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.408964 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.409026 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.409112 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.409208 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:00Z","lastTransitionTime":"2026-01-27T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.479419 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:13:48.134998759 +0000 UTC Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.511529 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.511615 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.511631 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.511653 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.511669 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:00Z","lastTransitionTime":"2026-01-27T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.614890 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.615398 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.615479 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.615552 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.615631 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:00Z","lastTransitionTime":"2026-01-27T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.718207 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.718264 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.718273 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.718286 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.718294 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:00Z","lastTransitionTime":"2026-01-27T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.821133 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.821177 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.821186 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.821201 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.821211 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:00Z","lastTransitionTime":"2026-01-27T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.923014 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.923078 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.923089 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.923103 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:00 crc kubenswrapper[4914]: I0127 13:45:00.923113 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:00Z","lastTransitionTime":"2026-01-27T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.025161 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.025187 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.025197 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.025211 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.025221 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:01Z","lastTransitionTime":"2026-01-27T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.127586 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.127626 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.127634 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.127648 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.127658 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:01Z","lastTransitionTime":"2026-01-27T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.237560 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.237605 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.237614 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.237629 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.237640 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:01Z","lastTransitionTime":"2026-01-27T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.339497 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.339525 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.339535 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.339548 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.339558 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:01Z","lastTransitionTime":"2026-01-27T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.442173 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.442224 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.442236 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.442253 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.442272 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:01Z","lastTransitionTime":"2026-01-27T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.480545 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:07:19.516887186 +0000 UTC Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.544354 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.544415 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.544426 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.544441 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.544451 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:01Z","lastTransitionTime":"2026-01-27T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.646568 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.646641 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.646655 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.646672 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.646685 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:01Z","lastTransitionTime":"2026-01-27T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.748720 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.748757 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.748766 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.748778 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.748788 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:01Z","lastTransitionTime":"2026-01-27T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.850949 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.851580 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.851677 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.851859 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.851975 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:01Z","lastTransitionTime":"2026-01-27T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.954538 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.954573 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.954604 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.954622 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:01 crc kubenswrapper[4914]: I0127 13:45:01.954633 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:01Z","lastTransitionTime":"2026-01-27T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.057797 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.057869 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.057886 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.057903 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.057913 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:02Z","lastTransitionTime":"2026-01-27T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.159795 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.160110 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.160190 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.160293 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.160380 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:02Z","lastTransitionTime":"2026-01-27T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.262932 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.262973 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.262984 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.262998 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.263008 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:02Z","lastTransitionTime":"2026-01-27T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.293788 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.293805 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.293909 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.294418 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:02 crc kubenswrapper[4914]: E0127 13:45:02.294522 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:02 crc kubenswrapper[4914]: E0127 13:45:02.294624 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:02 crc kubenswrapper[4914]: E0127 13:45:02.294786 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:02 crc kubenswrapper[4914]: E0127 13:45:02.294905 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.295327 4914 scope.go:117] "RemoveContainer" containerID="874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.309165 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crc
ont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.326165 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.341333 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.352392 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.365744 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.365798 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.365811 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.365848 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.365860 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:02Z","lastTransitionTime":"2026-01-27T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.366416 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.379012 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.391286 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc 
kubenswrapper[4914]: I0127 13:45:02.404471 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.415328 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae2dacff-90b1-4fc6-8d50-2114afdd4916\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b755de4898a164ac9577098c88092f6c7110e89fb07df414a9a1b7010598fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a165c9c9662f86e2a4031b517de6ca06d8fd0d5b946e8e777c4aff95601706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a33f463899dafa02c42203728101686cc7171af93777439ce8bbf7feb13fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.435006 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.447893 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.460340 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.468865 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.468900 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.468908 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.468923 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.468949 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:02Z","lastTransitionTime":"2026-01-27T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.472397 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.481464 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:01:42.419472447 +0000 UTC Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.488684 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:40Z\\\",\\\"message\\\":\\\"ervices.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, 
Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 13:44:40.628225 6391 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 13:44:40.628420 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:44:40.628368 63\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748
e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.501429 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.512396 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.525430 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411e
b40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.530038 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/1.log" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.531894 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerStarted","Data":"f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1"} Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.532270 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.538792 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.548877 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20
232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.558530 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae2dacff-90b1-4fc6-8d50-2114afdd4916\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b755de4898a164ac9577098c88092f6c7110e89fb07df414a9a1b7010598fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a165c9c9662f86e2a4031b517de6ca06d8fd0d5b946e8e777c4aff95601706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a33f463899dafa02c42203728101686cc7171af93777439ce8bbf7feb13fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.580403 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.580439 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.580450 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.580465 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.580477 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:02Z","lastTransitionTime":"2026-01-27T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.595005 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.606358 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.618668 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.629975 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.649016 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:40Z\\\",\\\"message\\\":\\\"ervices.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 13:44:40.628225 6391 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 13:44:40.628420 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:44:40.628368 
63\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.661234 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.672095 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.682921 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.682980 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.682991 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.683003 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.683011 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:02Z","lastTransitionTime":"2026-01-27T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.689479 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.700618 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.715301 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\
\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.728237 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.744006 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.754205 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.774776 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.786362 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.786410 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.786422 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.786441 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.786454 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:02Z","lastTransitionTime":"2026-01-27T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.787105 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.801267 4914 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:02Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:02 crc 
kubenswrapper[4914]: I0127 13:45:02.889077 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.889118 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.889128 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.889173 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.889185 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:02Z","lastTransitionTime":"2026-01-27T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.992408 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.992455 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.992464 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.992478 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:02 crc kubenswrapper[4914]: I0127 13:45:02.992489 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:02Z","lastTransitionTime":"2026-01-27T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.094714 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.094778 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.094806 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.094821 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.094833 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:03Z","lastTransitionTime":"2026-01-27T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.196947 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.196986 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.196995 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.197007 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.197016 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:03Z","lastTransitionTime":"2026-01-27T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.299271 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.299642 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.299660 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.299684 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.299697 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:03Z","lastTransitionTime":"2026-01-27T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.403049 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.403107 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.403122 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.403141 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.403155 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:03Z","lastTransitionTime":"2026-01-27T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.482255 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:48:24.289841256 +0000 UTC Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.505869 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.505909 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.505918 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.505936 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.505945 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:03Z","lastTransitionTime":"2026-01-27T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.536754 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/2.log" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.537455 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/1.log" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.539548 4914 generic.go:334] "Generic (PLEG): container finished" podID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerID="f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1" exitCode=1 Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.539600 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1"} Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.539648 4914 scope.go:117] "RemoveContainer" containerID="874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.540203 4914 scope.go:117] "RemoveContainer" containerID="f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1" Jan 27 13:45:03 crc kubenswrapper[4914]: E0127 13:45:03.540385 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.555571 4914 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.571930 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae2dacff-90b1-4fc6-8d50-2114afdd4916\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b755de4898a164ac9577098c88092f6c7110e89fb07df414a9a1b7010598fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a165c9c9662f86e2a4031b517de6ca06d8fd0d5b946e8e777c4aff95601706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a33f463899dafa02c42203728101686cc7171af93777439ce8bbf7feb13fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.591787 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.606158 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.608695 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.608798 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.608816 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.608832 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.608842 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:03Z","lastTransitionTime":"2026-01-27T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.620120 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa
09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.634881 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.659104 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://874bc389225d9bfcaec3a672e4ecdd16877f105538256623983dab2007966d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:44:40Z\\\",\\\"message\\\":\\\"ervices.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 13:44:40.628225 6391 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 13:44:40.628420 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:44:40Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:44:40.628368 63\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:03Z\\\",\\\"message\\\":\\\"t added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:45:03.181100 6635 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181644 6635 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181595 6635 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-gnhrd\\\\nI0127 13:45:03.181653 6635 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-5vprj in node crc\\\\nI0127 13:45:03.181659 6635 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-5vprj after 0 failed attempt(s)\\\\nI0127 13:45:03.181666 6635 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181523 6635 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.673322 4914 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.684951 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.702635 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411e
b40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.713050 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.713102 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.713116 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.713141 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.713154 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:03Z","lastTransitionTime":"2026-01-27T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.719483 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\
\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.737037 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.750192 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.762445 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.775417 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.790835 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.805832 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.820203 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.820298 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.820593 4914 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.820616 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.820660 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:03Z","lastTransitionTime":"2026-01-27T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.821622 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 
27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.924250 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.924309 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.924323 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.924338 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:03 crc kubenswrapper[4914]: I0127 13:45:03.924348 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:03Z","lastTransitionTime":"2026-01-27T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.026793 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.026901 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.026914 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.026932 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.026946 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:04Z","lastTransitionTime":"2026-01-27T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.130409 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.130442 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.130450 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.130463 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.130472 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:04Z","lastTransitionTime":"2026-01-27T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.232898 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.232934 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.232944 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.232960 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.232971 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:04Z","lastTransitionTime":"2026-01-27T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.293876 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.293963 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.293986 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.293876 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:04 crc kubenswrapper[4914]: E0127 13:45:04.294042 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:04 crc kubenswrapper[4914]: E0127 13:45:04.294117 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:04 crc kubenswrapper[4914]: E0127 13:45:04.294200 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:04 crc kubenswrapper[4914]: E0127 13:45:04.294297 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.335578 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.335633 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.335643 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.335660 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.335674 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:04Z","lastTransitionTime":"2026-01-27T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.438242 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.438279 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.438290 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.438303 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.438312 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:04Z","lastTransitionTime":"2026-01-27T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.483555 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:11:11.513492257 +0000 UTC Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.540935 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.540963 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.540972 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.540986 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.540995 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:04Z","lastTransitionTime":"2026-01-27T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.542532 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/2.log" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.545675 4914 scope.go:117] "RemoveContainer" containerID="f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1" Jan 27 13:45:04 crc kubenswrapper[4914]: E0127 13:45:04.545968 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.558418 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.570717 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.581038 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc 
kubenswrapper[4914]: I0127 13:45:04.591249 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.600749 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 
13:45:04.612633 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae2dacff-90b1-4fc6-8d50-2114afdd4916\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b755de4898a164ac9577098c88092f6c7110e89fb07df414a9a1b7010598fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a165c9c9662f86e2a4031b517de6ca06d8fd0d5b946e8e777c4aff95601706\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a33f463899dafa02c42203728101686cc7171af93777439ce8bbf7feb13fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.634632 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.642445 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.642477 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.642486 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.642498 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.642508 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:04Z","lastTransitionTime":"2026-01-27T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.646140 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e
28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.659021 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.673287 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.692061 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:03Z\\\",\\\"message\\\":\\\"t added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:45:03.181100 6635 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181644 6635 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181595 6635 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-gnhrd\\\\nI0127 13:45:03.181653 6635 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-5vprj in node crc\\\\nI0127 13:45:03.181659 6635 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-5vprj after 0 failed attempt(s)\\\\nI0127 13:45:03.181666 6635 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181523 6635 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:45:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748
e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.706538 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.719145 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.735875 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411e
b40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.744872 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.744897 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.744905 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.744917 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.744925 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:04Z","lastTransitionTime":"2026-01-27T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.755515 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd
90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.769894 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.785875 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.797974 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:04Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.847608 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.847645 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.847683 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.847698 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.847709 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:04Z","lastTransitionTime":"2026-01-27T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.950354 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.950406 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.950417 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.950435 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:04 crc kubenswrapper[4914]: I0127 13:45:04.950447 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:04Z","lastTransitionTime":"2026-01-27T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.052567 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.052602 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.052610 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.052623 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.052632 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:05Z","lastTransitionTime":"2026-01-27T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.155328 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.155360 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.155370 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.155385 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.155396 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:05Z","lastTransitionTime":"2026-01-27T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.258743 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.258808 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.258862 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.258990 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.259057 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:05Z","lastTransitionTime":"2026-01-27T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.362256 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.362302 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.362313 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.362330 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.362340 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:05Z","lastTransitionTime":"2026-01-27T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.469205 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.469245 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.469256 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.469273 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.469285 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:05Z","lastTransitionTime":"2026-01-27T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.485342 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:29:34.124560893 +0000 UTC Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.571293 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.571336 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.571345 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.571359 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.571369 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:05Z","lastTransitionTime":"2026-01-27T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.673853 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.673900 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.673930 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.673944 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.673953 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:05Z","lastTransitionTime":"2026-01-27T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.776492 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.776539 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.776547 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.776560 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.776571 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:05Z","lastTransitionTime":"2026-01-27T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.878789 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.878893 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.878907 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.878925 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.878936 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:05Z","lastTransitionTime":"2026-01-27T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.981088 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.981131 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.981140 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.981155 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:05 crc kubenswrapper[4914]: I0127 13:45:05.981167 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:05Z","lastTransitionTime":"2026-01-27T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.083319 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.083380 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.083390 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.083405 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.083416 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.185750 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.185796 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.185807 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.185824 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.185838 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.289004 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.289079 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.289091 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.289104 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.289114 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.293431 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.293494 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:06 crc kubenswrapper[4914]: E0127 13:45:06.293602 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.293435 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:06 crc kubenswrapper[4914]: E0127 13:45:06.293749 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:06 crc kubenswrapper[4914]: E0127 13:45:06.293890 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.294038 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:06 crc kubenswrapper[4914]: E0127 13:45:06.294097 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.308286 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.308332 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.308345 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.308364 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.308377 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: E0127 13:45:06.322718 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:06Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.327385 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.327426 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.327434 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.327449 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.327458 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: E0127 13:45:06.340358 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:06Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.345627 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.345679 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.345692 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.345712 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.345725 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: E0127 13:45:06.359095 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:06Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.362887 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.362936 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.362951 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.362977 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.362993 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: E0127 13:45:06.376005 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:06Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.380303 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.380369 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.380386 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.380410 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.380429 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: E0127 13:45:06.392634 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:06Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:06 crc kubenswrapper[4914]: E0127 13:45:06.392754 4914 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.394206 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.394237 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.394248 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.394264 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.394274 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.486049 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:22:32.484810807 +0000 UTC Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.497040 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.497081 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.497092 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.497110 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.497122 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.599093 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.599126 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.599150 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.599164 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.599175 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.701151 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.701193 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.701205 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.701224 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.701237 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.803789 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.803821 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.803852 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.803868 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.803878 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.906355 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.906390 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.906401 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.906420 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:06 crc kubenswrapper[4914]: I0127 13:45:06.906431 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:06Z","lastTransitionTime":"2026-01-27T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.008698 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.008730 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.008741 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.008756 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.008769 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:07Z","lastTransitionTime":"2026-01-27T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.112071 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.112130 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.112142 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.112160 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.112174 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:07Z","lastTransitionTime":"2026-01-27T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.214522 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.214816 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.214936 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.215154 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.215371 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:07Z","lastTransitionTime":"2026-01-27T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.317488 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.317718 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.317819 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.317923 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.318015 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:07Z","lastTransitionTime":"2026-01-27T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.420606 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.421007 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.421094 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.421205 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.421267 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:07Z","lastTransitionTime":"2026-01-27T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.487223 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:06:52.911858842 +0000 UTC Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.523154 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.523192 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.523203 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.523218 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.523232 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:07Z","lastTransitionTime":"2026-01-27T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.625196 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.625251 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.625268 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.625287 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.625300 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:07Z","lastTransitionTime":"2026-01-27T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.728176 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.728220 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.728235 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.728254 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.728269 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:07Z","lastTransitionTime":"2026-01-27T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.830535 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.830572 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.830581 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.830596 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.830609 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:07Z","lastTransitionTime":"2026-01-27T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.934120 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.934368 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.934461 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.934543 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:07 crc kubenswrapper[4914]: I0127 13:45:07.934657 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:07Z","lastTransitionTime":"2026-01-27T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.039281 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.039319 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.039327 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.039342 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.039350 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:08Z","lastTransitionTime":"2026-01-27T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.141749 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.141810 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.141824 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.141868 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.141886 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:08Z","lastTransitionTime":"2026-01-27T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.244000 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.244295 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.244369 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.244443 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.244517 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:08Z","lastTransitionTime":"2026-01-27T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.293915 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.293944 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.293956 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.293915 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:08 crc kubenswrapper[4914]: E0127 13:45:08.294036 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:08 crc kubenswrapper[4914]: E0127 13:45:08.294120 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:08 crc kubenswrapper[4914]: E0127 13:45:08.294177 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:08 crc kubenswrapper[4914]: E0127 13:45:08.294190 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.347055 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.347086 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.347097 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.347110 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.347120 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:08Z","lastTransitionTime":"2026-01-27T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.449969 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.450015 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.450024 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.450039 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.450051 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:08Z","lastTransitionTime":"2026-01-27T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.488065 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 01:32:55.296892181 +0000 UTC Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.553064 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.553380 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.553470 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.553574 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.553666 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:08Z","lastTransitionTime":"2026-01-27T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.655813 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.655862 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.655871 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.655883 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.655895 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:08Z","lastTransitionTime":"2026-01-27T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.758481 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.758519 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.758539 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.758554 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.758565 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:08Z","lastTransitionTime":"2026-01-27T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.860599 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.860625 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.860637 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.860651 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.860659 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:08Z","lastTransitionTime":"2026-01-27T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.963133 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.963172 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.963183 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.963199 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:08 crc kubenswrapper[4914]: I0127 13:45:08.963211 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:08Z","lastTransitionTime":"2026-01-27T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.065781 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.065858 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.065874 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.065908 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.065918 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:09Z","lastTransitionTime":"2026-01-27T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.168069 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.168109 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.168119 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.168134 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.168153 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:09Z","lastTransitionTime":"2026-01-27T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.271023 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.271080 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.271114 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.271132 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.271145 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:09Z","lastTransitionTime":"2026-01-27T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.373232 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.373272 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.373286 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.373302 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.373313 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:09Z","lastTransitionTime":"2026-01-27T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.476511 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.476573 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.476588 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.476613 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.476627 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:09Z","lastTransitionTime":"2026-01-27T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.489154 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:28:20.701479999 +0000 UTC Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.578979 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.579028 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.579039 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.579057 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.579067 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:09Z","lastTransitionTime":"2026-01-27T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.681770 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.681818 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.681849 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.681869 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.681882 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:09Z","lastTransitionTime":"2026-01-27T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.785129 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.785180 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.785195 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.785219 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.785236 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:09Z","lastTransitionTime":"2026-01-27T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.887621 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.887667 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.887677 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.887694 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.887703 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:09Z","lastTransitionTime":"2026-01-27T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.994007 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.994132 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.994151 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.994287 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:09 crc kubenswrapper[4914]: I0127 13:45:09.994315 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:09Z","lastTransitionTime":"2026-01-27T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.096818 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.096878 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.096889 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.096903 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.096913 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:10Z","lastTransitionTime":"2026-01-27T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.199038 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.199074 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.199085 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.199111 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.199150 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:10Z","lastTransitionTime":"2026-01-27T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.294194 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.294280 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.294327 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:10 crc kubenswrapper[4914]: E0127 13:45:10.294364 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:10 crc kubenswrapper[4914]: E0127 13:45:10.294475 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:10 crc kubenswrapper[4914]: E0127 13:45:10.294575 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.295948 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:10 crc kubenswrapper[4914]: E0127 13:45:10.296043 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.300961 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.300997 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.301009 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.301023 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.301037 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:10Z","lastTransitionTime":"2026-01-27T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.304596 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.403909 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.403949 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.403960 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.403978 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.403992 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:10Z","lastTransitionTime":"2026-01-27T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.490072 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:31:23.403010148 +0000 UTC Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.506273 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.506336 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.506352 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.506376 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.506389 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:10Z","lastTransitionTime":"2026-01-27T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.609015 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.609053 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.609064 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.609080 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.609092 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:10Z","lastTransitionTime":"2026-01-27T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.711080 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.711129 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.711141 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.711157 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.711167 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:10Z","lastTransitionTime":"2026-01-27T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.813150 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.813189 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.813199 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.813213 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.813224 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:10Z","lastTransitionTime":"2026-01-27T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.915633 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.915703 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.915733 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.915749 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:10 crc kubenswrapper[4914]: I0127 13:45:10.915760 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:10Z","lastTransitionTime":"2026-01-27T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.017399 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.017442 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.017453 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.017469 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.017478 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:11Z","lastTransitionTime":"2026-01-27T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.119967 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.120009 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.120019 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.120033 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.120043 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:11Z","lastTransitionTime":"2026-01-27T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.172041 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:11 crc kubenswrapper[4914]: E0127 13:45:11.172226 4914 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:45:11 crc kubenswrapper[4914]: E0127 13:45:11.172311 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs podName:72d4d49f-291e-448e-81eb-0895324cd4ae nodeName:}" failed. No retries permitted until 2026-01-27 13:45:43.172291715 +0000 UTC m=+101.484641800 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs") pod "network-metrics-daemon-22nld" (UID: "72d4d49f-291e-448e-81eb-0895324cd4ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.222210 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.222278 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.222292 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.222308 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.222345 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:11Z","lastTransitionTime":"2026-01-27T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.324185 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.324227 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.324235 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.324249 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.324259 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:11Z","lastTransitionTime":"2026-01-27T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.426590 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.426634 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.426644 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.426662 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.426674 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:11Z","lastTransitionTime":"2026-01-27T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.490557 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:33:55.827831974 +0000 UTC Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.528784 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.528819 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.528830 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.528869 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.528879 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:11Z","lastTransitionTime":"2026-01-27T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.631241 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.631285 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.631298 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.631317 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.631329 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:11Z","lastTransitionTime":"2026-01-27T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.733927 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.733967 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.733976 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.733990 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.733998 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:11Z","lastTransitionTime":"2026-01-27T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.836317 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.836349 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.836356 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.836369 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.836380 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:11Z","lastTransitionTime":"2026-01-27T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.938776 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.938818 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.938868 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.938887 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:11 crc kubenswrapper[4914]: I0127 13:45:11.938898 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:11Z","lastTransitionTime":"2026-01-27T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.042039 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.042087 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.042098 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.042111 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.042121 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:12Z","lastTransitionTime":"2026-01-27T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.144555 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.144594 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.144604 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.144619 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.144631 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:12Z","lastTransitionTime":"2026-01-27T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.246213 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.246258 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.246268 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.246286 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.246298 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:12Z","lastTransitionTime":"2026-01-27T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.294028 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.294043 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:12 crc kubenswrapper[4914]: E0127 13:45:12.294151 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.294169 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:12 crc kubenswrapper[4914]: E0127 13:45:12.294263 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:12 crc kubenswrapper[4914]: E0127 13:45:12.294350 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.295034 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:12 crc kubenswrapper[4914]: E0127 13:45:12.295161 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.307457 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.320196 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.331042 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.341631 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.349126 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 
13:45:12.349269 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.349485 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.349593 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.349683 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:12Z","lastTransitionTime":"2026-01-27T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.352800 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae2dacff-90b1-4fc6-8d50-2114afdd4916\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b755de4898a164ac9577098c88092f6c7110e89fb07df414a9a1b7010598fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a165c9c9662f86e2a4031b517de6ca06d8fd0d5b946e8e777c4aff95601706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a33f463899dafa02c42203728101686cc7171af93777439ce8bbf7feb13fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.372874 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.386779 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.402645 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411e
b40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.416977 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.436073 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:03Z\\\",\\\"message\\\":\\\"t added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:45:03.181100 6635 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181644 6635 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181595 6635 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-gnhrd\\\\nI0127 13:45:03.181653 6635 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-5vprj in node crc\\\\nI0127 13:45:03.181659 6635 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-5vprj after 0 failed attempt(s)\\\\nI0127 13:45:03.181666 6635 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181523 6635 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:45:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748
e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.448469 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef4fd2b2-492b-4146-b888-3afe8d31175a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42bb4f493515c0d2dfc9e64dba4f32177efb75d52cb95be919d2d63b0c4948dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.453513 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.453556 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.453569 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.453586 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.453598 4914 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:12Z","lastTransitionTime":"2026-01-27T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.460665 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.478653 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.491167 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 16:02:39.325755952 +0000 UTC Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.491269 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.505642 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de
2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.519184 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.530039 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc 
kubenswrapper[4914]: I0127 13:45:12.545410 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.556046 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:12Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.556451 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.556478 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.556487 4914 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.556504 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.556516 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:12Z","lastTransitionTime":"2026-01-27T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.658955 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.658998 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.659009 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.659024 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.659035 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:12Z","lastTransitionTime":"2026-01-27T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.761337 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.761380 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.761391 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.761406 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.761417 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:12Z","lastTransitionTime":"2026-01-27T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.864265 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.864306 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.864317 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.864332 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.864342 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:12Z","lastTransitionTime":"2026-01-27T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.966346 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.966384 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.966395 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.966412 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:12 crc kubenswrapper[4914]: I0127 13:45:12.966424 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:12Z","lastTransitionTime":"2026-01-27T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.069454 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.069488 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.069498 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.069513 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.069523 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:13Z","lastTransitionTime":"2026-01-27T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.172653 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.172716 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.172727 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.172746 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.172763 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:13Z","lastTransitionTime":"2026-01-27T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.275487 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.275523 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.275531 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.275544 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.275553 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:13Z","lastTransitionTime":"2026-01-27T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.377615 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.377647 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.377657 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.377672 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.377684 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:13Z","lastTransitionTime":"2026-01-27T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.479379 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.479424 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.479436 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.479453 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.479474 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:13Z","lastTransitionTime":"2026-01-27T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.492011 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:34:07.918178375 +0000 UTC Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.584781 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.584855 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.584868 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.584885 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.584898 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:13Z","lastTransitionTime":"2026-01-27T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.687824 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.687883 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.687895 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.687911 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.687922 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:13Z","lastTransitionTime":"2026-01-27T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.790549 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.790589 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.790597 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.790633 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.790644 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:13Z","lastTransitionTime":"2026-01-27T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.896896 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.897476 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.897517 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.897536 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:13 crc kubenswrapper[4914]: I0127 13:45:13.897546 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:13Z","lastTransitionTime":"2026-01-27T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.000431 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.000466 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.000477 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.000492 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.000503 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:14Z","lastTransitionTime":"2026-01-27T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.103073 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.103117 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.103125 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.103139 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.103149 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:14Z","lastTransitionTime":"2026-01-27T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.205999 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.206092 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.206108 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.206125 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.206134 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:14Z","lastTransitionTime":"2026-01-27T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.294147 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.294235 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:14 crc kubenswrapper[4914]: E0127 13:45:14.294310 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.294322 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.294359 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:14 crc kubenswrapper[4914]: E0127 13:45:14.294509 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:14 crc kubenswrapper[4914]: E0127 13:45:14.294668 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:14 crc kubenswrapper[4914]: E0127 13:45:14.294786 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.308635 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.308689 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.308701 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.308717 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.309028 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:14Z","lastTransitionTime":"2026-01-27T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.411865 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.411892 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.411901 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.411914 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.411924 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:14Z","lastTransitionTime":"2026-01-27T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.492699 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:56:27.998491992 +0000 UTC Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.514391 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.514413 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.514422 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.514436 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.514445 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:14Z","lastTransitionTime":"2026-01-27T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.616974 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.617026 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.617038 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.617051 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.617059 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:14Z","lastTransitionTime":"2026-01-27T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.719917 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.719958 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.719967 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.719984 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.719997 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:14Z","lastTransitionTime":"2026-01-27T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.822569 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.822623 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.822637 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.822654 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.822666 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:14Z","lastTransitionTime":"2026-01-27T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.925158 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.925198 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.925211 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.925227 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:14 crc kubenswrapper[4914]: I0127 13:45:14.925238 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:14Z","lastTransitionTime":"2026-01-27T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.027036 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.027066 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.027076 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.027088 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.027096 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:15Z","lastTransitionTime":"2026-01-27T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.128675 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.128718 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.128727 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.128741 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.128751 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:15Z","lastTransitionTime":"2026-01-27T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.230917 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.230957 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.230969 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.230984 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.230995 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:15Z","lastTransitionTime":"2026-01-27T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.333777 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.333819 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.333828 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.333864 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.333873 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:15Z","lastTransitionTime":"2026-01-27T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.436109 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.436144 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.436157 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.436170 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.436182 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:15Z","lastTransitionTime":"2026-01-27T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.493371 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 02:25:08.678605069 +0000 UTC Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.538873 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.538910 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.538920 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.538931 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.538941 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:15Z","lastTransitionTime":"2026-01-27T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.640808 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.640892 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.640910 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.640926 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.640935 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:15Z","lastTransitionTime":"2026-01-27T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.743284 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.743329 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.743337 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.743350 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.743359 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:15Z","lastTransitionTime":"2026-01-27T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.846072 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.846114 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.846123 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.846137 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.846147 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:15Z","lastTransitionTime":"2026-01-27T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.948916 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.948972 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.948990 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.949009 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:15 crc kubenswrapper[4914]: I0127 13:45:15.949020 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:15Z","lastTransitionTime":"2026-01-27T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.051656 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.051705 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.051720 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.051737 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.051747 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.154259 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.154294 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.154302 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.154317 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.154326 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.257117 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.257167 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.257186 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.257207 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.257223 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.294035 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.294074 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:16 crc kubenswrapper[4914]: E0127 13:45:16.294186 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.294055 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.294547 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:16 crc kubenswrapper[4914]: E0127 13:45:16.294735 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:16 crc kubenswrapper[4914]: E0127 13:45:16.294900 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:16 crc kubenswrapper[4914]: E0127 13:45:16.295001 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.358696 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.358730 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.358739 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.358751 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.358761 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.461768 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.461816 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.461979 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.461998 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.462010 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.494322 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 10:27:42.066529081 +0000 UTC Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.565518 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.565579 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.565590 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.565605 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.565614 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.581586 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b628_38170a87-0bc0-4c7d-b7a0-45b86a1f79e3/kube-multus/0.log" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.581641 4914 generic.go:334] "Generic (PLEG): container finished" podID="38170a87-0bc0-4c7d-b7a0-45b86a1f79e3" containerID="d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f" exitCode=1 Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.581672 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b628" event={"ID":"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3","Type":"ContainerDied","Data":"d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.582076 4914 scope.go:117] "RemoveContainer" containerID="d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.596616 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.609678 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.621396 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.634123 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.644176 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 
13:45:16.644242 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.644253 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.644266 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.644275 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.646387 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae2dacff-90b1-4fc6-8d50-2114afdd4916\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b755de4898a164ac9577098c88092f6c7110e89fb07df414a9a1b7010598fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a165c9c9662f86e2a4031b517de6ca06d8fd0d5b946e8e777c4aff95601706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a33f463899dafa02c42203728101686cc7171af93777439ce8bbf7feb13fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: E0127 13:45:16.660467 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.664073 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.664101 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.664109 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.664137 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.664147 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.665974 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.675115 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: E0127 13:45:16.677077 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.680229 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.680273 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.680281 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.680294 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.680303 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.691053 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: E0127 13:45:16.691080 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.696190 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.696232 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.696243 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.696259 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.696274 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: E0127 13:45:16.706992 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.707247 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"2026-01-27T13:44:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b6e57264-ff00-4537-aa42-4767d683c4f2\\\\n2026-01-27T13:44:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b6e57264-ff00-4537-aa42-4767d683c4f2 to /host/opt/cni/bin/\\\\n2026-01-27T13:44:31Z [verbose] multus-daemon started\\\\n2026-01-27T13:44:31Z [verbose] Readiness Indicator file check\\\\n2026-01-27T13:45:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.710546 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.710581 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.710591 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.710605 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.710615 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.726104 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:03Z\\\",\\\"message\\\":\\\"t added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:45:03.181100 6635 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181644 6635 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181595 6635 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-gnhrd\\\\nI0127 13:45:03.181653 6635 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-5vprj in node crc\\\\nI0127 13:45:03.181659 6635 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-5vprj after 0 failed attempt(s)\\\\nI0127 13:45:03.181666 6635 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181523 6635 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:45:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748
e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: E0127 13:45:16.729294 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: E0127 13:45:16.729565 4914 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.731138 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.731226 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.731287 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.731346 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.731408 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.736909 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef4fd2b2-492b-4146-b888-3afe8d31175a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42bb4f493515c0d2dfc9e64dba4f32177efb75d52cb95be919d2d63b0c4948dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.747386 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.758757 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.768421 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.779769 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T1
3:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.790802 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.800229 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc 
kubenswrapper[4914]: I0127 13:45:16.822532 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.833350 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.833410 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.833419 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.833432 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.833442 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.833818 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:16Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.935196 4914 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.935407 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.935473 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.935575 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:16 crc kubenswrapper[4914]: I0127 13:45:16.935655 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:16Z","lastTransitionTime":"2026-01-27T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.037811 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.038037 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.038098 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.038158 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.038236 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:17Z","lastTransitionTime":"2026-01-27T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.140944 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.141142 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.141199 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.141259 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.141341 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:17Z","lastTransitionTime":"2026-01-27T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.243870 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.244146 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.244218 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.244277 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.244331 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:17Z","lastTransitionTime":"2026-01-27T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.295131 4914 scope.go:117] "RemoveContainer" containerID="f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1" Jan 27 13:45:17 crc kubenswrapper[4914]: E0127 13:45:17.296154 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.346913 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.347164 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.347239 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.347314 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.347428 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:17Z","lastTransitionTime":"2026-01-27T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.450277 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.450314 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.450322 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.450334 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.450343 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:17Z","lastTransitionTime":"2026-01-27T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.495241 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 07:20:14.872303915 +0000 UTC Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.552361 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.552405 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.552419 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.552436 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.552448 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:17Z","lastTransitionTime":"2026-01-27T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.585905 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b628_38170a87-0bc0-4c7d-b7a0-45b86a1f79e3/kube-multus/0.log" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.586213 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b628" event={"ID":"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3","Type":"ContainerStarted","Data":"cbd46cb10b4609f1c672c23d55dbb211fb5f130fc861dbc837fa5be5b44a2f90"} Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.601863 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae2dacff-90b1-4fc6-8d50-2114afdd4916\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b755de4898a164ac9577098c88092f6c7110e89fb07df414a9a1b7010598fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a165c9c9662f86e2a4031b517de6ca06d8fd0d5b946e8e777c4aff95601706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a33f463899dafa02c42203728101686cc7171af93777439ce8bbf7feb13fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.622228 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.632932 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:
03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.646239 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.654498 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.654714 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.654826 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.654958 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.655040 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:17Z","lastTransitionTime":"2026-01-27T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.659241 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.670413 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.682712 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef4fd2b2-492b-4146-b888-3afe8d31175a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42bb4f493515c0d2dfc9e64dba4f32177efb75d52cb95be919d2d63b0c4948dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.696083 4914 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.706507 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.720175 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411e
b40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.732766 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbd46cb10b4609f1c672c23d55dbb211fb5f130fc861dbc837fa5be5b44a2f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"2026-01-27T13:44:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b6e57264-ff00-4537-aa42-4767d683c4f2\\\\n2026-01-27T13:44:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b6e57264-ff00-4537-aa42-4767d683c4f2 to /host/opt/cni/bin/\\\\n2026-01-27T13:44:31Z [verbose] multus-daemon started\\\\n2026-01-27T13:44:31Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T13:45:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.751052 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:03Z\\\",\\\"message\\\":\\\"t added to shared informer because it has stopped already, failed to start node network 
controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:45:03.181100 6635 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181644 6635 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181595 6635 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-gnhrd\\\\nI0127 13:45:03.181653 6635 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-5vprj in node crc\\\\nI0127 13:45:03.181659 6635 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-5vprj after 0 failed attempt(s)\\\\nI0127 13:45:03.181666 6635 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181523 6635 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:45:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748
e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.757718 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.757769 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.757782 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.757800 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.757813 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:17Z","lastTransitionTime":"2026-01-27T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.765828 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5
d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.779374 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.791439 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.801788 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.814244 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.823871 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.832409 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:17 crc 
kubenswrapper[4914]: I0127 13:45:17.860088 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.860151 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.860163 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.860179 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.860190 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:17Z","lastTransitionTime":"2026-01-27T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.962618 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.962927 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.963003 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.963096 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:17 crc kubenswrapper[4914]: I0127 13:45:17.963200 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:17Z","lastTransitionTime":"2026-01-27T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.065047 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.065089 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.065098 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.065114 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.065124 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:18Z","lastTransitionTime":"2026-01-27T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.167783 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.167820 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.167845 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.167863 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.167875 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:18Z","lastTransitionTime":"2026-01-27T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.269988 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.270024 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.270035 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.270050 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.270060 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:18Z","lastTransitionTime":"2026-01-27T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.293289 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.293386 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.293433 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:18 crc kubenswrapper[4914]: E0127 13:45:18.293468 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.293508 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:18 crc kubenswrapper[4914]: E0127 13:45:18.293544 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:18 crc kubenswrapper[4914]: E0127 13:45:18.293595 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:18 crc kubenswrapper[4914]: E0127 13:45:18.293768 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.371742 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.372285 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.372397 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.372488 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.372578 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:18Z","lastTransitionTime":"2026-01-27T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.474857 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.474892 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.474903 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.474917 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.474929 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:18Z","lastTransitionTime":"2026-01-27T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.495641 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 16:49:59.621589721 +0000 UTC Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.576806 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.576864 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.576878 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.576896 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.576908 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:18Z","lastTransitionTime":"2026-01-27T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.679643 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.679684 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.679695 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.679782 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.679809 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:18Z","lastTransitionTime":"2026-01-27T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.782276 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.782310 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.782319 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.782334 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.782343 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:18Z","lastTransitionTime":"2026-01-27T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.885145 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.885194 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.885204 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.885219 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.885230 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:18Z","lastTransitionTime":"2026-01-27T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.988536 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.988583 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.988596 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.988621 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:18 crc kubenswrapper[4914]: I0127 13:45:18.988633 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:18Z","lastTransitionTime":"2026-01-27T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.090931 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.090968 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.090977 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.090993 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.091002 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:19Z","lastTransitionTime":"2026-01-27T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.193317 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.193359 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.193375 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.193394 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.193405 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:19Z","lastTransitionTime":"2026-01-27T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.295092 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.295122 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.295131 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.295141 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.295149 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:19Z","lastTransitionTime":"2026-01-27T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.397876 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.397942 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.397952 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.397968 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.397979 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:19Z","lastTransitionTime":"2026-01-27T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.496137 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:41:41.479795957 +0000 UTC Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.499753 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.499790 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.499800 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.499817 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.499851 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:19Z","lastTransitionTime":"2026-01-27T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.602087 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.602120 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.602132 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.602148 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.602159 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:19Z","lastTransitionTime":"2026-01-27T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.704862 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.704911 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.704926 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.704944 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.704959 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:19Z","lastTransitionTime":"2026-01-27T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.807512 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.807543 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.807555 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.807572 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.807583 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:19Z","lastTransitionTime":"2026-01-27T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.910393 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.910432 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.910441 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.910459 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:19 crc kubenswrapper[4914]: I0127 13:45:19.910468 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:19Z","lastTransitionTime":"2026-01-27T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.013139 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.013187 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.013198 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.013215 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.013227 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:20Z","lastTransitionTime":"2026-01-27T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.115668 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.115703 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.115714 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.115733 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.115744 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:20Z","lastTransitionTime":"2026-01-27T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.217805 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.217879 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.217890 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.217906 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.217917 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:20Z","lastTransitionTime":"2026-01-27T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.293292 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.293393 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.293433 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.293341 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:20 crc kubenswrapper[4914]: E0127 13:45:20.293496 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:20 crc kubenswrapper[4914]: E0127 13:45:20.293580 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:20 crc kubenswrapper[4914]: E0127 13:45:20.293794 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:20 crc kubenswrapper[4914]: E0127 13:45:20.293858 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.320363 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.320408 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.320417 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.320431 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.320440 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:20Z","lastTransitionTime":"2026-01-27T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.423312 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.423383 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.423400 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.423424 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.423441 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:20Z","lastTransitionTime":"2026-01-27T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.496286 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:30:00.438385893 +0000 UTC Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.526770 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.526875 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.526886 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.526902 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.526943 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:20Z","lastTransitionTime":"2026-01-27T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.629847 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.629898 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.629910 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.629924 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.629935 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:20Z","lastTransitionTime":"2026-01-27T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.731722 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.731777 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.731786 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.731799 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.731809 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:20Z","lastTransitionTime":"2026-01-27T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.833929 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.833964 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.833975 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.833991 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.834002 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:20Z","lastTransitionTime":"2026-01-27T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.935624 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.935660 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.935670 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.935683 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:20 crc kubenswrapper[4914]: I0127 13:45:20.935693 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:20Z","lastTransitionTime":"2026-01-27T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.037408 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.037452 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.037463 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.037478 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.037488 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:21Z","lastTransitionTime":"2026-01-27T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.140760 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.140901 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.140917 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.140934 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.140948 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:21Z","lastTransitionTime":"2026-01-27T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.244788 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.245231 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.245242 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.245260 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.245271 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:21Z","lastTransitionTime":"2026-01-27T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.347495 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.347524 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.347595 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.347613 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.347621 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:21Z","lastTransitionTime":"2026-01-27T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.449351 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.449385 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.449395 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.449407 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.449415 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:21Z","lastTransitionTime":"2026-01-27T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.497082 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 15:59:36.602958511 +0000 UTC Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.551457 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.551507 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.551522 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.551577 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.551590 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:21Z","lastTransitionTime":"2026-01-27T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.654784 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.654869 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.654884 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.654908 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.654919 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:21Z","lastTransitionTime":"2026-01-27T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.757299 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.757361 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.757380 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.757424 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.757449 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:21Z","lastTransitionTime":"2026-01-27T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.860290 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.860360 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.860376 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.860393 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.860403 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:21Z","lastTransitionTime":"2026-01-27T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.962502 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.962557 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.962566 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.962580 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:21 crc kubenswrapper[4914]: I0127 13:45:21.962591 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:21Z","lastTransitionTime":"2026-01-27T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.064968 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.064997 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.065006 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.065018 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.065026 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:22Z","lastTransitionTime":"2026-01-27T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.167996 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.168049 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.168058 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.168075 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.168089 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:22Z","lastTransitionTime":"2026-01-27T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.269899 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.269937 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.269948 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.269963 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.269973 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:22Z","lastTransitionTime":"2026-01-27T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.293296 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.293458 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:22 crc kubenswrapper[4914]: E0127 13:45:22.293607 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.293646 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.293699 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:22 crc kubenswrapper[4914]: E0127 13:45:22.293790 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:22 crc kubenswrapper[4914]: E0127 13:45:22.293849 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:22 crc kubenswrapper[4914]: E0127 13:45:22.293894 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.305709 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.351730 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661
b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd36
7c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.370310 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbd46cb10b4609f1c672c23d55dbb211fb5f130fc861dbc837fa5be5b44a2f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"2026-01-27T13:44:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b6e57264-ff00-4537-aa42-4767d683c4f2\\\\n2026-01-27T13:44:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b6e57264-ff00-4537-aa42-4767d683c4f2 to /host/opt/cni/bin/\\\\n2026-01-27T13:44:31Z [verbose] multus-daemon started\\\\n2026-01-27T13:44:31Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T13:45:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.371762 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.371796 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.371807 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.371823 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.371861 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:22Z","lastTransitionTime":"2026-01-27T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.393903 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:03Z\\\",\\\"message\\\":\\\"t added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:45:03.181100 6635 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181644 6635 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181595 6635 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-gnhrd\\\\nI0127 13:45:03.181653 6635 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-5vprj in node crc\\\\nI0127 13:45:03.181659 6635 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-5vprj after 0 failed attempt(s)\\\\nI0127 13:45:03.181666 6635 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181523 6635 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:45:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748
e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.405782 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef4fd2b2-492b-4146-b888-3afe8d31175a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42bb4f493515c0d2dfc9e64dba4f32177efb75d52cb95be919d2d63b0c4948dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.421408 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.436223 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.447352 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.460429 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T1
3:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.474588 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.474722 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.474767 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.474779 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.474797 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.474811 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:22Z","lastTransitionTime":"2026-01-27T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.484696 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc 
kubenswrapper[4914]: I0127 13:45:22.497250 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 19:42:47.674844296 +0000 UTC Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.497295 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.507325 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.519190 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.533315 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.544393 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.555021 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.565974 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae2dacff-90b1-4fc6-8d50-2114afdd4916\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b755de4898a164ac9577098c88092f6c7110e89fb07df414a9a1b7010598fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a165c9c9662f86e2a4031b517de6ca06d8fd0d5b946e8e777c4aff95601706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a33f463899dafa02c42203728101686cc7171af93777439ce8bbf7feb13fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.577881 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.577923 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.577934 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.577953 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.577965 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:22Z","lastTransitionTime":"2026-01-27T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.585612 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.680651 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.680699 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.680714 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.680739 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.680757 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:22Z","lastTransitionTime":"2026-01-27T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.783517 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.783552 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.783561 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.783577 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.783587 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:22Z","lastTransitionTime":"2026-01-27T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.886143 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.886200 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.886223 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.886251 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.886273 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:22Z","lastTransitionTime":"2026-01-27T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.989002 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.989057 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.989067 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.989082 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:22 crc kubenswrapper[4914]: I0127 13:45:22.989092 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:22Z","lastTransitionTime":"2026-01-27T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.091869 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.091927 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.091941 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.092003 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.092021 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:23Z","lastTransitionTime":"2026-01-27T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.195019 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.195070 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.195082 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.195098 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.195109 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:23Z","lastTransitionTime":"2026-01-27T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.297624 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.297659 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.297667 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.297677 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.297686 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:23Z","lastTransitionTime":"2026-01-27T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.400246 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.400288 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.400298 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.400314 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.400326 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:23Z","lastTransitionTime":"2026-01-27T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.497748 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:04:47.856640235 +0000 UTC Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.502564 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.502622 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.502637 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.502657 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.502671 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:23Z","lastTransitionTime":"2026-01-27T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.604551 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.604624 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.604637 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.604653 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.604663 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:23Z","lastTransitionTime":"2026-01-27T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.707306 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.707346 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.707360 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.707376 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.707390 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:23Z","lastTransitionTime":"2026-01-27T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.809914 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.809962 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.809971 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.809985 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.809996 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:23Z","lastTransitionTime":"2026-01-27T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.922922 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.922966 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.922977 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.922994 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:23 crc kubenswrapper[4914]: I0127 13:45:23.923004 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:23Z","lastTransitionTime":"2026-01-27T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.026899 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.026976 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.026988 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.027006 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.027016 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:24Z","lastTransitionTime":"2026-01-27T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.129067 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.129113 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.129124 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.129140 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.129152 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:24Z","lastTransitionTime":"2026-01-27T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.231774 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.231819 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.231848 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.231864 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.231876 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:24Z","lastTransitionTime":"2026-01-27T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.294260 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.294338 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:24 crc kubenswrapper[4914]: E0127 13:45:24.294405 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.294447 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:24 crc kubenswrapper[4914]: E0127 13:45:24.294581 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.294630 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:24 crc kubenswrapper[4914]: E0127 13:45:24.294782 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:24 crc kubenswrapper[4914]: E0127 13:45:24.294642 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.334372 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.334431 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.334511 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.334528 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.334539 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:24Z","lastTransitionTime":"2026-01-27T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.436532 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.436577 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.436585 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.436598 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.436608 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:24Z","lastTransitionTime":"2026-01-27T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.498742 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:43:33.288557978 +0000 UTC Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.539071 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.539122 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.539157 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.539176 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.539186 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:24Z","lastTransitionTime":"2026-01-27T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.641730 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.641768 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.641776 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.641794 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.641805 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:24Z","lastTransitionTime":"2026-01-27T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.744236 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.744282 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.744296 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.744311 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.744320 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:24Z","lastTransitionTime":"2026-01-27T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.847263 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.847312 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.847323 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.847340 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.847352 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:24Z","lastTransitionTime":"2026-01-27T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.949379 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.949459 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.949478 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.949502 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:24 crc kubenswrapper[4914]: I0127 13:45:24.949521 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:24Z","lastTransitionTime":"2026-01-27T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.052473 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.052515 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.052533 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.052551 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.052563 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:25Z","lastTransitionTime":"2026-01-27T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.155443 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.155513 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.155531 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.155552 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.155569 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:25Z","lastTransitionTime":"2026-01-27T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.257662 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.257714 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.257731 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.257752 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.257769 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:25Z","lastTransitionTime":"2026-01-27T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.359343 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.359379 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.359390 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.359403 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.359413 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:25Z","lastTransitionTime":"2026-01-27T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.461144 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.461190 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.461199 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.461215 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.461227 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:25Z","lastTransitionTime":"2026-01-27T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.499727 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:24:54.096807419 +0000 UTC Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.564297 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.564345 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.564353 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.564369 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.564378 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:25Z","lastTransitionTime":"2026-01-27T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.666108 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.666163 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.666174 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.666192 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.666204 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:25Z","lastTransitionTime":"2026-01-27T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.769379 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.769429 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.769440 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.769462 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.769486 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:25Z","lastTransitionTime":"2026-01-27T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.872395 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.872445 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.872484 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.872503 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.872513 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:25Z","lastTransitionTime":"2026-01-27T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.975219 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.975254 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.975263 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.975276 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:25 crc kubenswrapper[4914]: I0127 13:45:25.975287 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:25Z","lastTransitionTime":"2026-01-27T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.077774 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.077887 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.077924 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.077953 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.077977 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:26Z","lastTransitionTime":"2026-01-27T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.180933 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.180980 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.180990 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.181005 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.181014 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:26Z","lastTransitionTime":"2026-01-27T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.282893 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.282939 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.282951 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.282965 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.282974 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:26Z","lastTransitionTime":"2026-01-27T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.293459 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.293494 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.293507 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.293592 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:26 crc kubenswrapper[4914]: E0127 13:45:26.293593 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:26 crc kubenswrapper[4914]: E0127 13:45:26.293710 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:26 crc kubenswrapper[4914]: E0127 13:45:26.293746 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:26 crc kubenswrapper[4914]: E0127 13:45:26.293784 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.385727 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.385771 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.385782 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.385796 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.385807 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:26Z","lastTransitionTime":"2026-01-27T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.488456 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.488501 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.488518 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.488539 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.488555 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:26Z","lastTransitionTime":"2026-01-27T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.499901 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:32:43.965272644 +0000 UTC Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.590811 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.590883 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.590893 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.590907 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.590916 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:26Z","lastTransitionTime":"2026-01-27T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.693808 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.693866 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.693878 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.693894 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.693906 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:26Z","lastTransitionTime":"2026-01-27T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.796874 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.796933 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.796945 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.796962 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.796973 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:26Z","lastTransitionTime":"2026-01-27T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.899767 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.899808 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.899816 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.899850 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:26 crc kubenswrapper[4914]: I0127 13:45:26.899859 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:26Z","lastTransitionTime":"2026-01-27T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.002981 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.003028 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.003037 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.003051 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.003060 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.006329 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.006389 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.006404 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.006423 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.006438 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: E0127 13:45:27.020775 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.024399 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.024434 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.024445 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.024461 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.024473 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: E0127 13:45:27.069897 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.073585 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.073608 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.073616 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.073629 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.073639 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: E0127 13:45:27.087066 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.091063 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.091123 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.091133 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.091151 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.091162 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: E0127 13:45:27.105374 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.109813 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.109863 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.109873 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.109890 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.109906 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: E0127 13:45:27.121267 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:27 crc kubenswrapper[4914]: E0127 13:45:27.121426 4914 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.122920 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.122965 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.122976 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.122991 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.123001 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.227470 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.227542 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.227559 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.227582 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.227597 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.330574 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.330613 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.330621 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.330636 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.330646 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.433702 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.433753 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.433763 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.433779 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.433791 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.500823 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 10:10:11.181384831 +0000 UTC Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.536552 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.536597 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.536605 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.536618 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.536629 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.638713 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.638772 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.638790 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.638811 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.638857 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.741134 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.741174 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.741183 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.741199 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.741210 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.844236 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.844281 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.844291 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.844305 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.844316 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.946663 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.946703 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.946714 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.946729 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:27 crc kubenswrapper[4914]: I0127 13:45:27.946740 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:27Z","lastTransitionTime":"2026-01-27T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.049072 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.049134 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.049157 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.049179 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.049198 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:28Z","lastTransitionTime":"2026-01-27T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.151563 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.151601 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.151610 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.151626 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.151639 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:28Z","lastTransitionTime":"2026-01-27T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.254058 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.254103 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.254113 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.254127 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.254136 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:28Z","lastTransitionTime":"2026-01-27T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.258632 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.258803 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 13:46:32.258776272 +0000 UTC m=+150.571126397 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.258926 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.258968 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.259044 4914 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.259094 4914 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.259112 4914 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.259095831 +0000 UTC m=+150.571445976 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.259168 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.259151313 +0000 UTC m=+150.571501428 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.293631 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.293792 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.293939 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.293978 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.294000 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.294055 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.294252 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.294349 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.299351 4914 scope.go:117] "RemoveContainer" containerID="f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.356825 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.356877 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.356888 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.356903 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.356915 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:28Z","lastTransitionTime":"2026-01-27T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.359511 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.359687 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.359709 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.359708 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.359725 4914 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.359775 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.359755081 +0000 UTC m=+150.672105166 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.360165 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.360221 4914 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.360241 4914 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:45:28 crc kubenswrapper[4914]: E0127 13:45:28.360298 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.360277225 +0000 UTC m=+150.672627350 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.459346 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.459395 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.459407 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.459425 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.459438 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:28Z","lastTransitionTime":"2026-01-27T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.501158 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:58:57.473638403 +0000 UTC Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.561021 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.561319 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.561335 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.561348 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.561356 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:28Z","lastTransitionTime":"2026-01-27T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.663531 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.663582 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.663597 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.663635 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.663650 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:28Z","lastTransitionTime":"2026-01-27T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.765713 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.765778 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.765793 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.765962 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.765991 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:28Z","lastTransitionTime":"2026-01-27T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.868030 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.868071 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.868082 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.868096 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.868105 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:28Z","lastTransitionTime":"2026-01-27T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.970443 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.970507 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.970520 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.970537 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:28 crc kubenswrapper[4914]: I0127 13:45:28.970550 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:28Z","lastTransitionTime":"2026-01-27T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.073522 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.073592 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.073610 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.073635 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.073654 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:29Z","lastTransitionTime":"2026-01-27T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.176222 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.176302 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.176313 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.176326 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.176335 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:29Z","lastTransitionTime":"2026-01-27T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.279067 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.279147 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.279172 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.279200 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.279223 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:29Z","lastTransitionTime":"2026-01-27T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.381673 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.381752 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.381762 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.381778 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.381788 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:29Z","lastTransitionTime":"2026-01-27T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.487320 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.487364 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.487372 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.487386 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.487396 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:29Z","lastTransitionTime":"2026-01-27T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.501736 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 23:22:38.334913307 +0000 UTC Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.589290 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.589322 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.589330 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.589342 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.589350 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:29Z","lastTransitionTime":"2026-01-27T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.692068 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.692124 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.692133 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.692148 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.692158 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:29Z","lastTransitionTime":"2026-01-27T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.794377 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.794434 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.794451 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.794469 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.794481 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:29Z","lastTransitionTime":"2026-01-27T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.896403 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.896433 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.896443 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.896455 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.896466 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:29Z","lastTransitionTime":"2026-01-27T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.999355 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.999400 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.999410 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.999427 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:29 crc kubenswrapper[4914]: I0127 13:45:29.999438 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:29Z","lastTransitionTime":"2026-01-27T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.128528 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.128577 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.128586 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.128602 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.128623 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:30Z","lastTransitionTime":"2026-01-27T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.230728 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.230762 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.230773 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.230785 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.230794 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:30Z","lastTransitionTime":"2026-01-27T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.293808 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.293851 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.293886 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.293861 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:30 crc kubenswrapper[4914]: E0127 13:45:30.294014 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:30 crc kubenswrapper[4914]: E0127 13:45:30.294083 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:30 crc kubenswrapper[4914]: E0127 13:45:30.294175 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:30 crc kubenswrapper[4914]: E0127 13:45:30.294236 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.333207 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.333234 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.333242 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.333254 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.333264 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:30Z","lastTransitionTime":"2026-01-27T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.435991 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.436048 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.436067 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.436088 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.436105 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:30Z","lastTransitionTime":"2026-01-27T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.502732 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:54:33.165650478 +0000 UTC Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.538665 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.538696 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.538706 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.538722 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.538737 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:30Z","lastTransitionTime":"2026-01-27T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.627798 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/3.log" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.628557 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/2.log" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.632170 4914 generic.go:334] "Generic (PLEG): container finished" podID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerID="a16d765b49acc107009e3c8ebfc08e72f9e2772b3f0b03936a26dd8ff4fa1cf5" exitCode=1 Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.632211 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"a16d765b49acc107009e3c8ebfc08e72f9e2772b3f0b03936a26dd8ff4fa1cf5"} Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.632247 4914 scope.go:117] "RemoveContainer" containerID="f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.632994 4914 scope.go:117] "RemoveContainer" containerID="a16d765b49acc107009e3c8ebfc08e72f9e2772b3f0b03936a26dd8ff4fa1cf5" Jan 27 13:45:30 crc kubenswrapper[4914]: E0127 13:45:30.633167 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.643363 4914 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.643438 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.643452 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.643468 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.643479 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:30Z","lastTransitionTime":"2026-01-27T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.650222 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.666793 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.680451 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.691247 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.703136 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.715242 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.726939 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc 
kubenswrapper[4914]: I0127 13:45:30.741127 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.745634 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.745672 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.745681 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.745696 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.745705 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:30Z","lastTransitionTime":"2026-01-27T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.755592 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae2dacff-90b1-4fc6-8d50-2114afdd4916\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b755de4898a164ac9577098c88092f6c7110e89fb07df414a9a1b7010598fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a165c9c9662f86e2a4031b517de6ca06d8fd0d5b946e8e777c4aff95601706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a33f463899dafa02c42203728101686cc7171af93777439ce8bbf7feb13fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.772794 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.785171 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.799816 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.812633 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.830767 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16d765b49acc107009e3c8ebfc08e72f9e2772b3f0b03936a26dd8ff4fa1cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:03Z\\\",\\\"message\\\":\\\"t added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:45:03.181100 6635 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181644 6635 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181595 6635 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-gnhrd\\\\nI0127 13:45:03.181653 6635 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-5vprj in node crc\\\\nI0127 13:45:03.181659 6635 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-5vprj after 0 failed attempt(s)\\\\nI0127 13:45:03.181666 6635 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181523 6635 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:45:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16d765b49acc107009e3c8ebfc08e72f9e2772b3f0b03936a26dd8ff4fa1cf5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:30Z\\\",\\\"message\\\":\\\"emon-22nld]\\\\nI0127 13:45:30.548287 7056 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0127 13:45:30.548280 7056 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 13:45:30.548308 7056 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-22nld before timer (time: 2026-01-27 13:45:31.8410105 +0000 UTC m=+1.890686028): skip\\\\nI0127 13:45:30.548323 7056 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI0127 13:45:30.548326 7056 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 51.481µs)\\\\nI0127 13:45:30.548344 7056 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 13:45:30.548365 7056 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 13:45:30.548400 7056 factory.go:656] Stopping watch factory\\\\nI0127 13:45:30.548417 7056 ovnkube.go:599] Stopped ovnkube\\\\nI0127 13:45:30.548431 7056 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 13:45:30.548446 7056 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 13:45:30.548447 7056 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 13:45:30.548531 7056 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\
\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.841632 4914 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef4fd2b2-492b-4146-b888-3afe8d31175a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42bb4f493515c0d2dfc9e64dba4f32177efb75d52cb95be919d2d63b0c4948dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.848411 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.848456 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.848475 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.848495 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.848507 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:30Z","lastTransitionTime":"2026-01-27T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.854592 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.864863 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.881730 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411e
b40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.897075 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbd46cb10b4609f1c672c23d55dbb211fb5f130fc861dbc837fa5be5b44a2f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"2026-01-27T13:44:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b6e57264-ff00-4537-aa42-4767d683c4f2\\\\n2026-01-27T13:44:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b6e57264-ff00-4537-aa42-4767d683c4f2 to /host/opt/cni/bin/\\\\n2026-01-27T13:44:31Z [verbose] multus-daemon started\\\\n2026-01-27T13:44:31Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T13:45:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:30Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.951010 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.951166 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.951179 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.951205 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:30 crc kubenswrapper[4914]: I0127 13:45:30.951217 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:30Z","lastTransitionTime":"2026-01-27T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.054290 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.054339 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.054350 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.054369 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.054384 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:31Z","lastTransitionTime":"2026-01-27T13:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.157147 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.157193 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.157205 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.157221 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.157233 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:31Z","lastTransitionTime":"2026-01-27T13:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.259730 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.259791 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.259799 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.259813 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.259822 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:31Z","lastTransitionTime":"2026-01-27T13:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.362367 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.362414 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.362423 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.362437 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.362449 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:31Z","lastTransitionTime":"2026-01-27T13:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.464957 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.465007 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.465016 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.465034 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.465043 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:31Z","lastTransitionTime":"2026-01-27T13:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.503425 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:16:19.847750199 +0000 UTC Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.568203 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.568239 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.568251 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.568266 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.568278 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:31Z","lastTransitionTime":"2026-01-27T13:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.637715 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/3.log" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.670332 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.670383 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.670397 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.670413 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.670425 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:31Z","lastTransitionTime":"2026-01-27T13:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.773137 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.773182 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.773192 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.773210 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.773222 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:31Z","lastTransitionTime":"2026-01-27T13:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.875204 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.875241 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.875249 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.875262 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.875272 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:31Z","lastTransitionTime":"2026-01-27T13:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.977699 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.977783 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.977811 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.977871 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:31 crc kubenswrapper[4914]: I0127 13:45:31.977892 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:31Z","lastTransitionTime":"2026-01-27T13:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.080856 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.080889 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.080899 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.080914 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.080924 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:32Z","lastTransitionTime":"2026-01-27T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.182878 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.182922 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.182936 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.182957 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.182973 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:32Z","lastTransitionTime":"2026-01-27T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.285486 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.285535 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.285546 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.285564 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.285576 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:32Z","lastTransitionTime":"2026-01-27T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.294171 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.294213 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.294294 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:32 crc kubenswrapper[4914]: E0127 13:45:32.294440 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.294539 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:32 crc kubenswrapper[4914]: E0127 13:45:32.294588 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:32 crc kubenswrapper[4914]: E0127 13:45:32.294672 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:32 crc kubenswrapper[4914]: E0127 13:45:32.294715 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.313580 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adce282-c454-4aa2-9cbe-356c7d371f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16d765b49acc107009e3c8ebfc08e72f9e2772b3f0b03936a26dd8ff4fa1cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4cac02b8e731ea126a70f19a5d705644436b378a1c27fab7bdc8edb275b10d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:03Z\\\",\\\"message\\\":\\\"t added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:03Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:45:03.181100 6635 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181644 6635 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181595 6635 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-gnhrd\\\\nI0127 13:45:03.181653 6635 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-5vprj in node crc\\\\nI0127 13:45:03.181659 6635 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-5vprj after 0 failed attempt(s)\\\\nI0127 13:45:03.181666 6635 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-5vprj\\\\nI0127 13:45:03.181523 6635 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:45:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16d765b49acc107009e3c8ebfc08e72f9e2772b3f0b03936a26dd8ff4fa1cf5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:30Z\\\",\\\"message\\\":\\\"emon-22nld]\\\\nI0127 13:45:30.548287 7056 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0127 13:45:30.548280 7056 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 13:45:30.548308 7056 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-22nld before timer (time: 2026-01-27 13:45:31.8410105 +0000 UTC m=+1.890686028): skip\\\\nI0127 13:45:30.548323 7056 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI0127 13:45:30.548326 7056 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 51.481µs)\\\\nI0127 13:45:30.548344 7056 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 13:45:30.548365 7056 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 13:45:30.548400 7056 factory.go:656] Stopping watch factory\\\\nI0127 13:45:30.548417 7056 ovnkube.go:599] Stopped ovnkube\\\\nI0127 13:45:30.548431 7056 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 13:45:30.548446 7056 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 13:45:30.548447 7056 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 13:45:30.548531 7056 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\
\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpnbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7m5xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.327548 4914 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef4fd2b2-492b-4146-b888-3afe8d31175a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42bb4f493515c0d2dfc9e64dba4f32177efb75d52cb95be919d2d63b0c4948dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e572c5db026705caec1874b3946e3def2eb193b8aad8d2e199e20dfbbbbd4a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.339444 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.350097 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnhrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c183ba27-856b-4b3e-a8e4-3a1ef30a891a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db42948f19e15544345e7ea0bbe9ff66030152f11e1fc05403495bfeb6e89c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9ll5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnhrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.374247 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0669c8c6-fa51-4aab-bf05-50f96cd91035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c58e4a2a3661b087d80fffb395c6fba3f58458930cd637e800322591e27b190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91f0a284b29f5e6b3e240f81e5b7d3e8e4a24feadf3d8e587e6666c3f68d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bca69082dda371cde54b7aa88fc9565687b22985bb74d122b9f8869ff1e094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b243aca7b609079a0a2c205bc8a65d7affb0be00666747ddf21343c3cf42ff96\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2fbc1dc55061e6e81a54e7dcb53d87cd26d57ef356c8e741ede6e04eb45a9c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411e
b40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12aeddf402631c73a1fafc24ecd411eb40d23ee9d2441543e3ca6333c05ed7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e4d8b55c95a3ad42458fffb991d0ed6f13a09030151db9d9b311a39e11e5cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rbqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.387255 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.387296 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.387311 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.387328 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.387342 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:32Z","lastTransitionTime":"2026-01-27T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.389068 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b628" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbd46cb10b4609f1c672c23d55dbb211fb5f130fc861dbc837fa5be5b44a2f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:45:16Z\\\",\\\"message\\\":\\\"2026-01-27T13:44:31+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b6e57264-ff00-4537-aa42-4767d683c4f2\\\\n2026-01-27T13:44:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b6e57264-ff00-4537-aa42-4767d683c4f2 to /host/opt/cni/bin/\\\\n2026-01-27T13:44:31Z [verbose] multus-daemon started\\\\n2026-01-27T13:44:31Z [verbose] Readiness Indicator file check\\\\n2026-01-27T13:45:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b628\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.400937 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b979f7f-2cfd-417e-aa1f-6108ebb77e17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:44:23Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 13:44:18.231000 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 13:44:18.231749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1500627801/tls.crt::/tmp/serving-cert-1500627801/tls.key\\\\\\\"\\\\nI0127 13:44:23.754378 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 13:44:23.756852 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 13:44:23.756873 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 13:44:23.756898 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:44:23.756904 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:44:23.767674 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:44:23.767711 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:44:23.767724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:44:23.767729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:44:23.767733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:44:23.767736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:44:23.768056 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 13:44:23.771752 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4be90fa9da3318c1c24810be4271daa06
d28ba3bd61821cfcd7afbdcb84a6221\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.411296 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee0361c553490987601417f617f8c59941f5f35a6eb29b2f3ec638763645ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.421155 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.429630 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5vprj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afff8b35-f3f4-4f13-a19d-cb318f982fbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f046c7e9cf55bf69bfc1bf4d728f4d6095db65f17e9f89ef462679719ac2a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rspcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5vprj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.445340 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.454871 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab2c7833-d799-431e-a4dc-d6790e7c732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501a3056cfbdb80f4db4512bc5b5c5472171fa6bb8e991669fefc770f80df234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89adccd2426d30582859887febfed5f2a3cd3
bd0bee3851203db822661d59a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb7m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lhm6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.468779 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-22nld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72d4d49f-291e-448e-81eb-0895324cd4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d74rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-22nld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc 
kubenswrapper[4914]: I0127 13:45:32.477758 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf2dcff-9caa-45ba-98a8-0a00861bd11a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66acfbaa3279392d189d1630527c24fd4220b7f9ac6064392ba2a9bd75a2f456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pz8dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qhdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.486084 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae2dacff-90b1-4fc6-8d50-2114afdd4916\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b755de4898a164ac9577098c88092f6c7110e89fb07df414a9a1b7010598fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a165c9c9662f86e2a4031b517de6ca06d8fd0d5b946e8e777c4aff95601706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a33f463899dafa02c42203728101686cc7171af93777439ce8bbf7feb13fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://22ab1705422837358d4286d43439de3652f3db3edc569c6707076d1fc4937d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.492076 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.492153 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.492171 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.492194 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.492212 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:32Z","lastTransitionTime":"2026-01-27T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.504605 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:55:47.479313581 +0000 UTC Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.505744 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de6937df-e184-4eed-8c98-52bcb729b11b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274e8e33d4edc2b43cef0dd56668a2ed2567c6b2ad730b92725bdd25f69775a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\
\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b991a145d5cf0ea893e7364ba942150dc5c2b5618c4ce5d246caaaa769c46bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c82609231c1cdd36346e9206a95bfaa1ff85c16ddd19698697a285810ac36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf81548f99a5403de0b
8213a780bfecc85cd07aaf8463a4b842a80dc21abf778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eca382660e5c57e636d2edff5af272930be7eed6058c9aeebbd14ab323b4d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9c7ba26cd51a485ff10d4dfd2acf520fe20eedecede6d3ffcdb760f47f5a17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbf9690da4b56e464a5e86ccda0b27c445d45e178b40c2b316552d8a25b749de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b0bc667f62c8799c59ba4787b91f7e2c91cdb0b0815075a7d63ecfdc5a0881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.517451 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b65030f2-ce71-4fc2-befb-2a7cee63d8fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88d0368914f7e9f1d5fd738f71e4a817cb2751ba0eaeb8106d4efc24f0bf5ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47d6873d3a3cae55d70f9a6be116afba64af2864098323360b94243d1ea375c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0a3007d111ef0b11385cd4fe496dc391e1c17ee19cbe1dfcac36b2feb741f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:44:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.538443 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5054f1e9c6d266928e5a2e3ab682680f088a90b4fccfe23f6e32bcc93ee074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b73faa09156da17ed4ba742e373475e8a9a2d56dc7bf80835e109ab92e56ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.553331 4914 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02baaebfa9dc12391eb40a6520d4a2b28dae6b9585b16081522db4cd8c72a8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:45:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.594962 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.595030 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.595040 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.595061 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.595077 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:32Z","lastTransitionTime":"2026-01-27T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.697540 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.697583 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.697595 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.697610 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.697622 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:32Z","lastTransitionTime":"2026-01-27T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.804986 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.805035 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.805053 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.805076 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.805094 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:32Z","lastTransitionTime":"2026-01-27T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.907283 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.907329 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.907341 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.907358 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:32 crc kubenswrapper[4914]: I0127 13:45:32.907372 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:32Z","lastTransitionTime":"2026-01-27T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.014348 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.014388 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.014399 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.014414 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.014424 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:33Z","lastTransitionTime":"2026-01-27T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.117188 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.117238 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.117249 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.117266 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.117277 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:33Z","lastTransitionTime":"2026-01-27T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.220259 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.220297 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.220309 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.220323 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.220346 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:33Z","lastTransitionTime":"2026-01-27T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.322825 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.322883 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.322892 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.322909 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.322922 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:33Z","lastTransitionTime":"2026-01-27T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.424742 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.424820 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.424871 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.424891 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.424922 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:33Z","lastTransitionTime":"2026-01-27T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.505860 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 20:20:32.237251742 +0000 UTC Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.527607 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.527650 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.527662 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.527678 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.527690 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:33Z","lastTransitionTime":"2026-01-27T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.631041 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.631090 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.631099 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.631115 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.631127 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:33Z","lastTransitionTime":"2026-01-27T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.733804 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.733886 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.733896 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.733910 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.733921 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:33Z","lastTransitionTime":"2026-01-27T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.836310 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.836349 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.836360 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.836378 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.836389 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:33Z","lastTransitionTime":"2026-01-27T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.938336 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.938372 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.938382 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.938398 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:33 crc kubenswrapper[4914]: I0127 13:45:33.938409 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:33Z","lastTransitionTime":"2026-01-27T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.041371 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.041402 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.041410 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.041423 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.041431 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:34Z","lastTransitionTime":"2026-01-27T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.144222 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.144267 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.144280 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.144296 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.144310 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:34Z","lastTransitionTime":"2026-01-27T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.246743 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.246793 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.246804 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.246825 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.246859 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:34Z","lastTransitionTime":"2026-01-27T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.293723 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.293758 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.293910 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:34 crc kubenswrapper[4914]: E0127 13:45:34.293951 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.294018 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:34 crc kubenswrapper[4914]: E0127 13:45:34.294016 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:34 crc kubenswrapper[4914]: E0127 13:45:34.294070 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:34 crc kubenswrapper[4914]: E0127 13:45:34.294118 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.349700 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.349772 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.349790 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.349814 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.349854 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:34Z","lastTransitionTime":"2026-01-27T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.453377 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.453430 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.453441 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.453462 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.453480 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:34Z","lastTransitionTime":"2026-01-27T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.506794 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:22:18.142953857 +0000 UTC Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.556600 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.556644 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.556653 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.556670 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.556683 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:34Z","lastTransitionTime":"2026-01-27T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.659335 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.659376 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.659387 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.659405 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.659416 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:34Z","lastTransitionTime":"2026-01-27T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.762112 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.762145 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.762155 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.762169 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.762181 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:34Z","lastTransitionTime":"2026-01-27T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.865240 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.865287 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.865298 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.865313 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.865323 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:34Z","lastTransitionTime":"2026-01-27T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.967348 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.967423 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.967445 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.967468 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:34 crc kubenswrapper[4914]: I0127 13:45:34.967485 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:34Z","lastTransitionTime":"2026-01-27T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.070032 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.070084 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.070098 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.070115 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.070126 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:35Z","lastTransitionTime":"2026-01-27T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.173823 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.173922 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.173939 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.173965 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.173982 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:35Z","lastTransitionTime":"2026-01-27T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.276805 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.276902 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.276920 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.276938 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.276949 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:35Z","lastTransitionTime":"2026-01-27T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.379282 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.379331 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.379342 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.379358 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.379371 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:35Z","lastTransitionTime":"2026-01-27T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.482080 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.482130 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.482142 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.482159 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.482170 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:35Z","lastTransitionTime":"2026-01-27T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.507964 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:11:05.916296017 +0000 UTC Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.584985 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.585084 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.585103 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.585126 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.585142 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:35Z","lastTransitionTime":"2026-01-27T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.687115 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.687146 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.687154 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.687185 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.687194 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:35Z","lastTransitionTime":"2026-01-27T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.789189 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.789247 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.789259 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.789275 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.789288 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:35Z","lastTransitionTime":"2026-01-27T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.893265 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.893390 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.893468 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.893507 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.893532 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:35Z","lastTransitionTime":"2026-01-27T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.996571 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.996624 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.996637 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.996657 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:35 crc kubenswrapper[4914]: I0127 13:45:35.996669 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:35Z","lastTransitionTime":"2026-01-27T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.101723 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.101767 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.101779 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.101794 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.101805 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:36Z","lastTransitionTime":"2026-01-27T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.204121 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.204157 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.204167 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.204183 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.204193 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:36Z","lastTransitionTime":"2026-01-27T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.293667 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.293798 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.293976 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:36 crc kubenswrapper[4914]: E0127 13:45:36.293966 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.293981 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:36 crc kubenswrapper[4914]: E0127 13:45:36.294092 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:36 crc kubenswrapper[4914]: E0127 13:45:36.294169 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:36 crc kubenswrapper[4914]: E0127 13:45:36.294429 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.313030 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.313086 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.313100 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.313119 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.313135 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:36Z","lastTransitionTime":"2026-01-27T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.416995 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.417050 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.417062 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.417082 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.417095 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:36Z","lastTransitionTime":"2026-01-27T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.508401 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:45:17.063153211 +0000 UTC Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.519349 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.519693 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.519707 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.519724 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.519735 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:36Z","lastTransitionTime":"2026-01-27T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.622868 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.622914 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.622923 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.622936 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.622945 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:36Z","lastTransitionTime":"2026-01-27T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.725052 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.725098 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.725107 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.725121 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.725132 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:36Z","lastTransitionTime":"2026-01-27T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.829307 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.829371 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.829391 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.829415 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.829432 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:36Z","lastTransitionTime":"2026-01-27T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.932831 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.932947 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.932958 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.932991 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:36 crc kubenswrapper[4914]: I0127 13:45:36.933005 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:36Z","lastTransitionTime":"2026-01-27T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.035317 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.035358 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.035368 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.035383 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.035392 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.138231 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.138283 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.138299 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.138322 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.138337 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.171219 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.171273 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.171287 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.171305 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.171323 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:37 crc kubenswrapper[4914]: E0127 13:45:37.187545 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.192728 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.192788 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.192801 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.192814 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.192824 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:37 crc kubenswrapper[4914]: E0127 13:45:37.214015 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.218427 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.218468 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.218477 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.218491 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.218500 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:37 crc kubenswrapper[4914]: E0127 13:45:37.238198 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.243088 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.243149 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.243158 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.243174 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.243183 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:37 crc kubenswrapper[4914]: E0127 13:45:37.257494 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.260935 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.260982 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.261014 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.261030 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.261043 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:37 crc kubenswrapper[4914]: E0127 13:45:37.274560 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:45:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"096812c4-5121-428c-9502-97f27967ca56\\\",\\\"systemUUID\\\":\\\"b46996ba-6bdd-421e-afd7-e88de2c05d29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:45:37Z is after 2025-08-24T17:21:41Z"
Jan 27 13:45:37 crc kubenswrapper[4914]: E0127 13:45:37.274726 4914 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.276256 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.276295 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.276308 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.276328 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.276341 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.378637 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.378682 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.378694 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.378709 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.378722 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.481727 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.481785 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.481797 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.481813 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.481825 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.509225 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:41:34.575792916 +0000 UTC
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.584604 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.584638 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.584648 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.584663 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.584675 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.687586 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.687628 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.687642 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.687661 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.687677 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.789936 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.789975 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.789984 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.789998 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.790011 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.893348 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.893419 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.893437 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.893460 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.893477 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.996277 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.996316 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.996327 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.996341 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:37 crc kubenswrapper[4914]: I0127 13:45:37.996352 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:37Z","lastTransitionTime":"2026-01-27T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.098612 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.098681 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.098692 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.098704 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.098713 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:38Z","lastTransitionTime":"2026-01-27T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.200739 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.200807 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.200821 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.200867 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.200881 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:38Z","lastTransitionTime":"2026-01-27T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.294101 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.294175 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.294186 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.294391 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 13:45:38 crc kubenswrapper[4914]: E0127 13:45:38.294570 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae"
Jan 27 13:45:38 crc kubenswrapper[4914]: E0127 13:45:38.294691 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 13:45:38 crc kubenswrapper[4914]: E0127 13:45:38.294862 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 13:45:38 crc kubenswrapper[4914]: E0127 13:45:38.294941 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.303531 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.303605 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.303619 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.303636 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.303647 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:38Z","lastTransitionTime":"2026-01-27T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.405767 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.405808 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.405819 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.405839 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.405850 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:38Z","lastTransitionTime":"2026-01-27T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.508817 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.508883 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.508894 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.508908 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.508917 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:38Z","lastTransitionTime":"2026-01-27T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.509882 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 05:41:31.691245265 +0000 UTC
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.612108 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.612151 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.612166 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.612180 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.612191 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:38Z","lastTransitionTime":"2026-01-27T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.714424 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.714460 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.714472 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.714487 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.714498 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:38Z","lastTransitionTime":"2026-01-27T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.816974 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.817000 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.817008 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.817020 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.817028 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:38Z","lastTransitionTime":"2026-01-27T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.918917 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.918945 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.918952 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.918964 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:38 crc kubenswrapper[4914]: I0127 13:45:38.918972 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:38Z","lastTransitionTime":"2026-01-27T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.022008 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.022085 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.022107 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.022130 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.022150 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:39Z","lastTransitionTime":"2026-01-27T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.124751 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.124785 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.124794 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.124808 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.124817 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:39Z","lastTransitionTime":"2026-01-27T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.227124 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.227173 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.227192 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.227210 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.227221 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:39Z","lastTransitionTime":"2026-01-27T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.330749 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.330797 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.330808 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.330824 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.330834 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:39Z","lastTransitionTime":"2026-01-27T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.433981 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.434043 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.434054 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.434071 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.434083 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:39Z","lastTransitionTime":"2026-01-27T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.510256 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:48:56.635666077 +0000 UTC Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.537148 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.537218 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.537232 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.537248 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.537261 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:39Z","lastTransitionTime":"2026-01-27T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.640203 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.640270 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.640281 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.640319 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.640341 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:39Z","lastTransitionTime":"2026-01-27T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.742875 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.742965 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.742983 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.743006 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.743053 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:39Z","lastTransitionTime":"2026-01-27T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.846359 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.846454 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.846471 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.846528 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.846547 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:39Z","lastTransitionTime":"2026-01-27T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.950223 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.950299 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.950317 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.950334 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:39 crc kubenswrapper[4914]: I0127 13:45:39.950345 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:39Z","lastTransitionTime":"2026-01-27T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.053195 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.053265 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.053275 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.053289 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.053297 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:40Z","lastTransitionTime":"2026-01-27T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.156483 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.156570 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.156585 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.156611 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.156621 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:40Z","lastTransitionTime":"2026-01-27T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.259411 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.259450 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.259460 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.259473 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.259484 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:40Z","lastTransitionTime":"2026-01-27T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.294344 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.294574 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:40 crc kubenswrapper[4914]: E0127 13:45:40.294750 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.294799 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.294768 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:40 crc kubenswrapper[4914]: E0127 13:45:40.295009 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:40 crc kubenswrapper[4914]: E0127 13:45:40.295131 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:40 crc kubenswrapper[4914]: E0127 13:45:40.295232 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.363143 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.363177 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.363187 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.363204 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.363236 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:40Z","lastTransitionTime":"2026-01-27T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.466823 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.466894 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.466905 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.466921 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.466933 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:40Z","lastTransitionTime":"2026-01-27T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.511200 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:58:30.597092484 +0000 UTC Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.571446 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.571502 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.571512 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.571528 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.571541 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:40Z","lastTransitionTime":"2026-01-27T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.674629 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.674666 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.674674 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.674688 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.674697 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:40Z","lastTransitionTime":"2026-01-27T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.782047 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.782083 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.782101 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.782120 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.782131 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:40Z","lastTransitionTime":"2026-01-27T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.884943 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.884986 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.884999 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.885017 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.885029 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:40Z","lastTransitionTime":"2026-01-27T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.987595 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.987649 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.987667 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.987688 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:40 crc kubenswrapper[4914]: I0127 13:45:40.987703 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:40Z","lastTransitionTime":"2026-01-27T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.089921 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.089987 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.089999 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.090023 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.090038 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:41Z","lastTransitionTime":"2026-01-27T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.192876 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.192922 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.192934 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.192953 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.192967 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:41Z","lastTransitionTime":"2026-01-27T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.295108 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.295195 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.295228 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.295258 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.295284 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:41Z","lastTransitionTime":"2026-01-27T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.398474 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.398520 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.398533 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.398551 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.398564 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:41Z","lastTransitionTime":"2026-01-27T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.503051 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.503098 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.503107 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.503122 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.503133 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:41Z","lastTransitionTime":"2026-01-27T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.511919 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:08:26.948504674 +0000 UTC Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.606104 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.606154 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.606166 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.606183 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.606195 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:41Z","lastTransitionTime":"2026-01-27T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.714472 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.714531 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.714552 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.714575 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.714589 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:41Z","lastTransitionTime":"2026-01-27T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.817886 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.817931 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.817948 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.817964 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.817974 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:41Z","lastTransitionTime":"2026-01-27T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.920692 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.920776 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.920796 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.920815 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:41 crc kubenswrapper[4914]: I0127 13:45:41.920845 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:41Z","lastTransitionTime":"2026-01-27T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.024076 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.024166 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.024233 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.024258 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.024273 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:42Z","lastTransitionTime":"2026-01-27T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.127742 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.127793 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.127805 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.127844 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.127859 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:42Z","lastTransitionTime":"2026-01-27T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.238364 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.238426 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.238436 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.238455 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.238468 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:42Z","lastTransitionTime":"2026-01-27T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.294274 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.294320 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.294425 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:42 crc kubenswrapper[4914]: E0127 13:45:42.294478 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:42 crc kubenswrapper[4914]: E0127 13:45:42.294589 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:42 crc kubenswrapper[4914]: E0127 13:45:42.294663 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.294753 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:42 crc kubenswrapper[4914]: E0127 13:45:42.294804 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.341613 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.341692 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.341709 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.341735 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.341749 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:42Z","lastTransitionTime":"2026-01-27T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.344599 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.344584431 podStartE2EDuration="1m18.344584431s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:45:42.344470017 +0000 UTC m=+100.656820112" watchObservedRunningTime="2026-01-27 13:45:42.344584431 +0000 UTC m=+100.656934526" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.408532 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5vprj" podStartSLOduration=79.408502195 podStartE2EDuration="1m19.408502195s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:45:42.389522335 +0000 UTC m=+100.701872440" watchObservedRunningTime="2026-01-27 13:45:42.408502195 +0000 UTC m=+100.720852280" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.435958 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lhm6l" podStartSLOduration=78.435935851 podStartE2EDuration="1m18.435935851s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:45:42.422584658 +0000 UTC m=+100.734934743" watchObservedRunningTime="2026-01-27 13:45:42.435935851 +0000 UTC m=+100.748285936" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.444620 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:42 crc kubenswrapper[4914]: 
I0127 13:45:42.444661 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.444674 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.444689 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.444701 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:42Z","lastTransitionTime":"2026-01-27T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.463928 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=44.463908541 podStartE2EDuration="44.463908541s" podCreationTimestamp="2026-01-27 13:44:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:45:42.46386524 +0000 UTC m=+100.776215335" watchObservedRunningTime="2026-01-27 13:45:42.463908541 +0000 UTC m=+100.776258626" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.464139 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podStartSLOduration=79.464132997 podStartE2EDuration="1m19.464132997s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 13:45:42.449239952 +0000 UTC m=+100.761590037" watchObservedRunningTime="2026-01-27 13:45:42.464132997 +0000 UTC m=+100.776483082" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.488053 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=77.488035065 podStartE2EDuration="1m17.488035065s" podCreationTimestamp="2026-01-27 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:45:42.48714706 +0000 UTC m=+100.799497145" watchObservedRunningTime="2026-01-27 13:45:42.488035065 +0000 UTC m=+100.800385140" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.512530 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 17:27:07.310121996 +0000 UTC Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.522136 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.522098956 podStartE2EDuration="1m19.522098956s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:45:42.502606311 +0000 UTC m=+100.814956396" watchObservedRunningTime="2026-01-27 13:45:42.522098956 +0000 UTC m=+100.834449041" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.547438 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.547487 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.547520 4914 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.547539 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.547551 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:42Z","lastTransitionTime":"2026-01-27T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.574271 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=32.574252121 podStartE2EDuration="32.574252121s" podCreationTimestamp="2026-01-27 13:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:45:42.573908101 +0000 UTC m=+100.886258186" watchObservedRunningTime="2026-01-27 13:45:42.574252121 +0000 UTC m=+100.886602206" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.599411 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-gnhrd" podStartSLOduration=79.599390612 podStartE2EDuration="1m19.599390612s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:45:42.598425506 +0000 UTC m=+100.910775591" watchObservedRunningTime="2026-01-27 13:45:42.599390612 +0000 UTC m=+100.911740697" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.615766 4914 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-554jw" podStartSLOduration=78.615744539 podStartE2EDuration="1m18.615744539s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:45:42.615462431 +0000 UTC m=+100.927812516" watchObservedRunningTime="2026-01-27 13:45:42.615744539 +0000 UTC m=+100.928094624" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.633982 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6b628" podStartSLOduration=79.633961898 podStartE2EDuration="1m19.633961898s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:45:42.633075622 +0000 UTC m=+100.945425717" watchObservedRunningTime="2026-01-27 13:45:42.633961898 +0000 UTC m=+100.946311983" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.649585 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.649628 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.649636 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.649653 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.649666 4914 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:42Z","lastTransitionTime":"2026-01-27T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.752889 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.752958 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.752976 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.752996 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.753010 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:42Z","lastTransitionTime":"2026-01-27T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.856190 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.856226 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.856235 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.856248 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.856259 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:42Z","lastTransitionTime":"2026-01-27T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.958112 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.958151 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.958161 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.958176 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:42 crc kubenswrapper[4914]: I0127 13:45:42.958186 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:42Z","lastTransitionTime":"2026-01-27T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.060584 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.060642 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.060650 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.060664 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.060673 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:43Z","lastTransitionTime":"2026-01-27T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.162398 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.162445 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.162457 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.162476 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.162489 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:43Z","lastTransitionTime":"2026-01-27T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.227684 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:43 crc kubenswrapper[4914]: E0127 13:45:43.227994 4914 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:45:43 crc kubenswrapper[4914]: E0127 13:45:43.228065 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs podName:72d4d49f-291e-448e-81eb-0895324cd4ae nodeName:}" failed. No retries permitted until 2026-01-27 13:46:47.228047649 +0000 UTC m=+165.540397734 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs") pod "network-metrics-daemon-22nld" (UID: "72d4d49f-291e-448e-81eb-0895324cd4ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.266089 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.266164 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.266182 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.266205 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.266222 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:43Z","lastTransitionTime":"2026-01-27T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.368815 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.368887 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.368899 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.368917 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.368930 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:43Z","lastTransitionTime":"2026-01-27T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.471317 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.471354 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.471364 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.471379 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.471388 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:43Z","lastTransitionTime":"2026-01-27T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.513490 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:03:53.165970348 +0000 UTC Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.573543 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.573606 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.573620 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.573693 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.573727 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:43Z","lastTransitionTime":"2026-01-27T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.675714 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.675750 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.675761 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.675777 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.675788 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:43Z","lastTransitionTime":"2026-01-27T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.777575 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.777634 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.777650 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.777668 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:43 crc kubenswrapper[4914]: I0127 13:45:43.777680 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:43Z","lastTransitionTime":"2026-01-27T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.264690 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.264739 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.264752 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.264771 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.264784 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:44Z","lastTransitionTime":"2026-01-27T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.294152 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.294164 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:44 crc kubenswrapper[4914]: E0127 13:45:44.294307 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.294164 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.294429 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:44 crc kubenswrapper[4914]: E0127 13:45:44.294944 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.294964 4914 scope.go:117] "RemoveContainer" containerID="a16d765b49acc107009e3c8ebfc08e72f9e2772b3f0b03936a26dd8ff4fa1cf5" Jan 27 13:45:44 crc kubenswrapper[4914]: E0127 13:45:44.295025 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:44 crc kubenswrapper[4914]: E0127 13:45:44.295088 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:44 crc kubenswrapper[4914]: E0127 13:45:44.295221 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.367514 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.367575 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.367586 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.367600 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.367614 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:44Z","lastTransitionTime":"2026-01-27T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.469911 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.469956 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.469971 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.469987 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.469997 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:44Z","lastTransitionTime":"2026-01-27T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.514060 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:30:22.242649711 +0000 UTC Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.572610 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.572664 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.572676 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.572692 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.572712 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:44Z","lastTransitionTime":"2026-01-27T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.674819 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.674873 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.674884 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.674899 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.674911 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:44Z","lastTransitionTime":"2026-01-27T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.776728 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.776767 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.776778 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.776793 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.776805 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:44Z","lastTransitionTime":"2026-01-27T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.883121 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.883172 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.883192 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.883210 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.883221 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:44Z","lastTransitionTime":"2026-01-27T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.985426 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.985457 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.985465 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.985477 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:44 crc kubenswrapper[4914]: I0127 13:45:44.985486 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:44Z","lastTransitionTime":"2026-01-27T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.088100 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.088148 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.088162 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.088181 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.088196 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:45Z","lastTransitionTime":"2026-01-27T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.190982 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.191054 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.191074 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.191098 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.191115 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:45Z","lastTransitionTime":"2026-01-27T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.293386 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.293434 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.293446 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.293462 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.293479 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:45Z","lastTransitionTime":"2026-01-27T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.396099 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.396139 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.396149 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.396162 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.396174 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:45Z","lastTransitionTime":"2026-01-27T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.498973 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.499020 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.499032 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.499050 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.499063 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:45Z","lastTransitionTime":"2026-01-27T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.514688 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:41:02.91093428 +0000 UTC Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.602058 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.602104 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.602116 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.602174 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.602192 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:45Z","lastTransitionTime":"2026-01-27T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.705020 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.705070 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.705081 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.705097 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.705112 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:45Z","lastTransitionTime":"2026-01-27T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.808137 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.808184 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.808196 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.808214 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.808226 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:45Z","lastTransitionTime":"2026-01-27T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.911364 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.911421 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.911442 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.911466 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:45 crc kubenswrapper[4914]: I0127 13:45:45.911501 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:45Z","lastTransitionTime":"2026-01-27T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.013395 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.013435 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.013445 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.013460 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.013471 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:46Z","lastTransitionTime":"2026-01-27T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.116308 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.116349 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.116359 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.116374 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.116385 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:46Z","lastTransitionTime":"2026-01-27T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.219700 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.219760 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.219768 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.219784 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.219801 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:46Z","lastTransitionTime":"2026-01-27T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.294331 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.294365 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.294415 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.294419 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:46 crc kubenswrapper[4914]: E0127 13:45:46.294472 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:46 crc kubenswrapper[4914]: E0127 13:45:46.294541 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:46 crc kubenswrapper[4914]: E0127 13:45:46.294595 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:46 crc kubenswrapper[4914]: E0127 13:45:46.294648 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.322908 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.322952 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.322961 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.322974 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.322984 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:46Z","lastTransitionTime":"2026-01-27T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.426130 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.426179 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.426192 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.426207 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.426218 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:46Z","lastTransitionTime":"2026-01-27T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.515254 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 20:35:04.35239457 +0000 UTC Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.528168 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.528218 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.528230 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.528247 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.528259 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:46Z","lastTransitionTime":"2026-01-27T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.630213 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.630263 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.630275 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.630292 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.630306 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:46Z","lastTransitionTime":"2026-01-27T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.733132 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.733169 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.733177 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.733191 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.733201 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:46Z","lastTransitionTime":"2026-01-27T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.835700 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.835753 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.835764 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.835781 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.835793 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:46Z","lastTransitionTime":"2026-01-27T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.937964 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.938007 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.938015 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.938030 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:46 crc kubenswrapper[4914]: I0127 13:45:46.938040 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:46Z","lastTransitionTime":"2026-01-27T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.040195 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.040240 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.040253 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.040272 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.040283 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:47Z","lastTransitionTime":"2026-01-27T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.143071 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.143115 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.143128 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.143146 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.143158 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:47Z","lastTransitionTime":"2026-01-27T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.245308 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.245353 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.245376 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.245400 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.245416 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:47Z","lastTransitionTime":"2026-01-27T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.347288 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.347364 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.347380 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.347401 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.347416 4914 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:45:47Z","lastTransitionTime":"2026-01-27T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.391404 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5"] Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.391808 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.393716 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.393854 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.393963 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.394070 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.494948 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.495028 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.495048 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.495108 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.495128 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.516106 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:08:03.457113353 +0000 UTC Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.516160 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.525651 4914 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.596161 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.596207 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.596234 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.596256 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.596293 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.596371 4914 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.596398 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.597397 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.602644 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.613992 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-txqj5\" (UID: \"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:47 crc kubenswrapper[4914]: I0127 13:45:47.706508 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" Jan 27 13:45:48 crc kubenswrapper[4914]: I0127 13:45:48.293631 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:48 crc kubenswrapper[4914]: I0127 13:45:48.293733 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:48 crc kubenswrapper[4914]: E0127 13:45:48.294320 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:48 crc kubenswrapper[4914]: I0127 13:45:48.293867 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:48 crc kubenswrapper[4914]: E0127 13:45:48.294417 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:48 crc kubenswrapper[4914]: I0127 13:45:48.293797 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:48 crc kubenswrapper[4914]: E0127 13:45:48.294514 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:48 crc kubenswrapper[4914]: E0127 13:45:48.294586 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:48 crc kubenswrapper[4914]: I0127 13:45:48.701090 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" event={"ID":"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d","Type":"ContainerStarted","Data":"921039909e981c545b8bbcde18c006e685a001a3a40d0960b7590f58b44a7b0c"} Jan 27 13:45:48 crc kubenswrapper[4914]: I0127 13:45:48.701147 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" event={"ID":"acf70c9c-c0ee-4d8f-8a4b-1eb82d7a364d","Type":"ContainerStarted","Data":"e6a128e4743cc5058ab284dccb02f0b435834151c475a5a95368a6a3cd32cb7a"} Jan 27 13:45:48 crc kubenswrapper[4914]: I0127 13:45:48.715046 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-txqj5" podStartSLOduration=85.715028846 
podStartE2EDuration="1m25.715028846s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:45:48.714530663 +0000 UTC m=+107.026880758" watchObservedRunningTime="2026-01-27 13:45:48.715028846 +0000 UTC m=+107.027378931" Jan 27 13:45:49 crc kubenswrapper[4914]: I0127 13:45:49.922119 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:45:49 crc kubenswrapper[4914]: I0127 13:45:49.923008 4914 scope.go:117] "RemoveContainer" containerID="a16d765b49acc107009e3c8ebfc08e72f9e2772b3f0b03936a26dd8ff4fa1cf5" Jan 27 13:45:49 crc kubenswrapper[4914]: E0127 13:45:49.923164 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" Jan 27 13:45:50 crc kubenswrapper[4914]: I0127 13:45:50.294065 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:50 crc kubenswrapper[4914]: I0127 13:45:50.294111 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:50 crc kubenswrapper[4914]: I0127 13:45:50.294173 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:50 crc kubenswrapper[4914]: I0127 13:45:50.294173 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:50 crc kubenswrapper[4914]: E0127 13:45:50.294212 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:50 crc kubenswrapper[4914]: E0127 13:45:50.294320 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:50 crc kubenswrapper[4914]: E0127 13:45:50.294408 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:50 crc kubenswrapper[4914]: E0127 13:45:50.294499 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:52 crc kubenswrapper[4914]: I0127 13:45:52.294224 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:52 crc kubenswrapper[4914]: I0127 13:45:52.294265 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:52 crc kubenswrapper[4914]: I0127 13:45:52.294354 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:52 crc kubenswrapper[4914]: I0127 13:45:52.295438 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:52 crc kubenswrapper[4914]: E0127 13:45:52.295658 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:52 crc kubenswrapper[4914]: E0127 13:45:52.295912 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:52 crc kubenswrapper[4914]: E0127 13:45:52.295959 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:52 crc kubenswrapper[4914]: E0127 13:45:52.296019 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:54 crc kubenswrapper[4914]: I0127 13:45:54.293483 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:54 crc kubenswrapper[4914]: I0127 13:45:54.293512 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:54 crc kubenswrapper[4914]: I0127 13:45:54.293631 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:54 crc kubenswrapper[4914]: E0127 13:45:54.293761 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:54 crc kubenswrapper[4914]: I0127 13:45:54.293862 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:54 crc kubenswrapper[4914]: E0127 13:45:54.294029 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:54 crc kubenswrapper[4914]: E0127 13:45:54.294068 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:54 crc kubenswrapper[4914]: E0127 13:45:54.294159 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:56 crc kubenswrapper[4914]: I0127 13:45:56.293241 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:56 crc kubenswrapper[4914]: I0127 13:45:56.293288 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:56 crc kubenswrapper[4914]: I0127 13:45:56.293247 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:56 crc kubenswrapper[4914]: E0127 13:45:56.293501 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:56 crc kubenswrapper[4914]: E0127 13:45:56.293637 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:56 crc kubenswrapper[4914]: E0127 13:45:56.293940 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:56 crc kubenswrapper[4914]: I0127 13:45:56.294287 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:56 crc kubenswrapper[4914]: E0127 13:45:56.294543 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:45:58 crc kubenswrapper[4914]: I0127 13:45:58.294036 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:45:58 crc kubenswrapper[4914]: I0127 13:45:58.294078 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:45:58 crc kubenswrapper[4914]: I0127 13:45:58.294093 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:45:58 crc kubenswrapper[4914]: E0127 13:45:58.294190 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:45:58 crc kubenswrapper[4914]: I0127 13:45:58.294213 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:45:58 crc kubenswrapper[4914]: E0127 13:45:58.294252 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:45:58 crc kubenswrapper[4914]: E0127 13:45:58.294332 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:45:58 crc kubenswrapper[4914]: E0127 13:45:58.294297 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:46:00 crc kubenswrapper[4914]: I0127 13:46:00.294335 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:00 crc kubenswrapper[4914]: I0127 13:46:00.294397 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:00 crc kubenswrapper[4914]: I0127 13:46:00.294499 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:00 crc kubenswrapper[4914]: E0127 13:46:00.294494 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:46:00 crc kubenswrapper[4914]: I0127 13:46:00.294356 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:00 crc kubenswrapper[4914]: E0127 13:46:00.294587 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:46:00 crc kubenswrapper[4914]: E0127 13:46:00.294651 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:46:00 crc kubenswrapper[4914]: E0127 13:46:00.294697 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:46:02 crc kubenswrapper[4914]: E0127 13:46:02.262474 4914 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 13:46:02 crc kubenswrapper[4914]: I0127 13:46:02.293597 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:02 crc kubenswrapper[4914]: I0127 13:46:02.293673 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:02 crc kubenswrapper[4914]: I0127 13:46:02.294954 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:02 crc kubenswrapper[4914]: E0127 13:46:02.294947 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:46:02 crc kubenswrapper[4914]: I0127 13:46:02.295015 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:02 crc kubenswrapper[4914]: E0127 13:46:02.295220 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:46:02 crc kubenswrapper[4914]: E0127 13:46:02.295277 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:46:02 crc kubenswrapper[4914]: E0127 13:46:02.295633 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:46:02 crc kubenswrapper[4914]: E0127 13:46:02.397684 4914 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 13:46:02 crc kubenswrapper[4914]: I0127 13:46:02.742748 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b628_38170a87-0bc0-4c7d-b7a0-45b86a1f79e3/kube-multus/1.log" Jan 27 13:46:02 crc kubenswrapper[4914]: I0127 13:46:02.743383 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b628_38170a87-0bc0-4c7d-b7a0-45b86a1f79e3/kube-multus/0.log" Jan 27 13:46:02 crc kubenswrapper[4914]: I0127 13:46:02.743451 4914 generic.go:334] "Generic (PLEG): container finished" podID="38170a87-0bc0-4c7d-b7a0-45b86a1f79e3" containerID="cbd46cb10b4609f1c672c23d55dbb211fb5f130fc861dbc837fa5be5b44a2f90" exitCode=1 Jan 27 13:46:02 crc kubenswrapper[4914]: I0127 13:46:02.743492 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b628" event={"ID":"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3","Type":"ContainerDied","Data":"cbd46cb10b4609f1c672c23d55dbb211fb5f130fc861dbc837fa5be5b44a2f90"} Jan 27 13:46:02 crc kubenswrapper[4914]: I0127 13:46:02.743547 4914 scope.go:117] "RemoveContainer" 
containerID="d1b6b9aba69297dc05c23c1986f77306502cf7486314b98fe33744c497dff72f" Jan 27 13:46:02 crc kubenswrapper[4914]: I0127 13:46:02.744087 4914 scope.go:117] "RemoveContainer" containerID="cbd46cb10b4609f1c672c23d55dbb211fb5f130fc861dbc837fa5be5b44a2f90" Jan 27 13:46:02 crc kubenswrapper[4914]: E0127 13:46:02.744276 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-6b628_openshift-multus(38170a87-0bc0-4c7d-b7a0-45b86a1f79e3)\"" pod="openshift-multus/multus-6b628" podUID="38170a87-0bc0-4c7d-b7a0-45b86a1f79e3" Jan 27 13:46:03 crc kubenswrapper[4914]: I0127 13:46:03.294942 4914 scope.go:117] "RemoveContainer" containerID="a16d765b49acc107009e3c8ebfc08e72f9e2772b3f0b03936a26dd8ff4fa1cf5" Jan 27 13:46:03 crc kubenswrapper[4914]: E0127 13:46:03.295111 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7m5xg_openshift-ovn-kubernetes(1adce282-c454-4aa2-9cbe-356c7d371f98)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" Jan 27 13:46:03 crc kubenswrapper[4914]: I0127 13:46:03.747361 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b628_38170a87-0bc0-4c7d-b7a0-45b86a1f79e3/kube-multus/1.log" Jan 27 13:46:04 crc kubenswrapper[4914]: I0127 13:46:04.294109 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:04 crc kubenswrapper[4914]: I0127 13:46:04.294121 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:04 crc kubenswrapper[4914]: I0127 13:46:04.294193 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:04 crc kubenswrapper[4914]: I0127 13:46:04.294360 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:04 crc kubenswrapper[4914]: E0127 13:46:04.294443 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:46:04 crc kubenswrapper[4914]: E0127 13:46:04.294570 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:46:04 crc kubenswrapper[4914]: E0127 13:46:04.294627 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:46:04 crc kubenswrapper[4914]: E0127 13:46:04.294688 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:46:06 crc kubenswrapper[4914]: I0127 13:46:06.293737 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:06 crc kubenswrapper[4914]: I0127 13:46:06.293794 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:06 crc kubenswrapper[4914]: E0127 13:46:06.293897 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:46:06 crc kubenswrapper[4914]: I0127 13:46:06.293729 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:06 crc kubenswrapper[4914]: I0127 13:46:06.293981 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:06 crc kubenswrapper[4914]: E0127 13:46:06.294097 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:46:06 crc kubenswrapper[4914]: E0127 13:46:06.294135 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:46:06 crc kubenswrapper[4914]: E0127 13:46:06.294243 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:46:07 crc kubenswrapper[4914]: E0127 13:46:07.399145 4914 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 13:46:08 crc kubenswrapper[4914]: I0127 13:46:08.293552 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:08 crc kubenswrapper[4914]: I0127 13:46:08.293673 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:08 crc kubenswrapper[4914]: E0127 13:46:08.293711 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:46:08 crc kubenswrapper[4914]: I0127 13:46:08.293555 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:08 crc kubenswrapper[4914]: I0127 13:46:08.293753 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:08 crc kubenswrapper[4914]: E0127 13:46:08.293977 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:46:08 crc kubenswrapper[4914]: E0127 13:46:08.294051 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:46:08 crc kubenswrapper[4914]: E0127 13:46:08.294252 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:46:10 crc kubenswrapper[4914]: I0127 13:46:10.293350 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:10 crc kubenswrapper[4914]: I0127 13:46:10.293405 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:10 crc kubenswrapper[4914]: E0127 13:46:10.293477 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:46:10 crc kubenswrapper[4914]: I0127 13:46:10.293543 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:10 crc kubenswrapper[4914]: E0127 13:46:10.293607 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:46:10 crc kubenswrapper[4914]: E0127 13:46:10.293698 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:46:10 crc kubenswrapper[4914]: I0127 13:46:10.293988 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:10 crc kubenswrapper[4914]: E0127 13:46:10.294061 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:46:12 crc kubenswrapper[4914]: I0127 13:46:12.293507 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:12 crc kubenswrapper[4914]: I0127 13:46:12.293560 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:12 crc kubenswrapper[4914]: E0127 13:46:12.294552 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:46:12 crc kubenswrapper[4914]: I0127 13:46:12.294586 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:12 crc kubenswrapper[4914]: I0127 13:46:12.294564 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:12 crc kubenswrapper[4914]: E0127 13:46:12.294630 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:46:12 crc kubenswrapper[4914]: E0127 13:46:12.294681 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:46:12 crc kubenswrapper[4914]: E0127 13:46:12.294721 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:46:12 crc kubenswrapper[4914]: E0127 13:46:12.399592 4914 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 13:46:14 crc kubenswrapper[4914]: I0127 13:46:14.293361 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:14 crc kubenswrapper[4914]: I0127 13:46:14.293412 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:14 crc kubenswrapper[4914]: E0127 13:46:14.293491 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:46:14 crc kubenswrapper[4914]: I0127 13:46:14.293540 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:14 crc kubenswrapper[4914]: E0127 13:46:14.293657 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:46:14 crc kubenswrapper[4914]: I0127 13:46:14.293700 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:14 crc kubenswrapper[4914]: E0127 13:46:14.293723 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:46:14 crc kubenswrapper[4914]: E0127 13:46:14.293821 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:46:16 crc kubenswrapper[4914]: I0127 13:46:16.294003 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:16 crc kubenswrapper[4914]: E0127 13:46:16.294201 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:46:16 crc kubenswrapper[4914]: I0127 13:46:16.294252 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:16 crc kubenswrapper[4914]: I0127 13:46:16.294332 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:16 crc kubenswrapper[4914]: I0127 13:46:16.294363 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:16 crc kubenswrapper[4914]: E0127 13:46:16.294434 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:46:16 crc kubenswrapper[4914]: E0127 13:46:16.294477 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:46:16 crc kubenswrapper[4914]: E0127 13:46:16.294531 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:46:16 crc kubenswrapper[4914]: I0127 13:46:16.295559 4914 scope.go:117] "RemoveContainer" containerID="a16d765b49acc107009e3c8ebfc08e72f9e2772b3f0b03936a26dd8ff4fa1cf5" Jan 27 13:46:16 crc kubenswrapper[4914]: I0127 13:46:16.789125 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/3.log" Jan 27 13:46:16 crc kubenswrapper[4914]: I0127 13:46:16.791425 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerStarted","Data":"4c154c6ae2cf3ee27bc31d0512ec48f46cf0ed6d8ebf30769e48987700f5d73c"} Jan 27 13:46:16 crc kubenswrapper[4914]: I0127 13:46:16.791976 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:46:16 crc kubenswrapper[4914]: I0127 13:46:16.816462 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podStartSLOduration=112.816443568 podStartE2EDuration="1m52.816443568s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:16.815300527 +0000 UTC 
m=+135.127650612" watchObservedRunningTime="2026-01-27 13:46:16.816443568 +0000 UTC m=+135.128793653" Jan 27 13:46:17 crc kubenswrapper[4914]: I0127 13:46:17.151598 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-22nld"] Jan 27 13:46:17 crc kubenswrapper[4914]: I0127 13:46:17.151717 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:17 crc kubenswrapper[4914]: E0127 13:46:17.151855 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:46:17 crc kubenswrapper[4914]: I0127 13:46:17.294291 4914 scope.go:117] "RemoveContainer" containerID="cbd46cb10b4609f1c672c23d55dbb211fb5f130fc861dbc837fa5be5b44a2f90" Jan 27 13:46:17 crc kubenswrapper[4914]: E0127 13:46:17.400301 4914 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 13:46:17 crc kubenswrapper[4914]: I0127 13:46:17.796423 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b628_38170a87-0bc0-4c7d-b7a0-45b86a1f79e3/kube-multus/1.log" Jan 27 13:46:17 crc kubenswrapper[4914]: I0127 13:46:17.796511 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b628" event={"ID":"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3","Type":"ContainerStarted","Data":"807f07d403d8741c6222e04cae0ce46ded60de609107b76c001b4f4282dcbb15"} Jan 27 13:46:18 crc kubenswrapper[4914]: I0127 13:46:18.293768 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:18 crc kubenswrapper[4914]: I0127 13:46:18.293847 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:18 crc kubenswrapper[4914]: I0127 13:46:18.293899 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:18 crc kubenswrapper[4914]: I0127 13:46:18.294031 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:18 crc kubenswrapper[4914]: E0127 13:46:18.294027 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:46:18 crc kubenswrapper[4914]: E0127 13:46:18.294150 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:46:18 crc kubenswrapper[4914]: E0127 13:46:18.294230 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:46:18 crc kubenswrapper[4914]: E0127 13:46:18.294356 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:46:20 crc kubenswrapper[4914]: I0127 13:46:20.294161 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:20 crc kubenswrapper[4914]: I0127 13:46:20.294204 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:20 crc kubenswrapper[4914]: I0127 13:46:20.294216 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:20 crc kubenswrapper[4914]: I0127 13:46:20.294148 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:20 crc kubenswrapper[4914]: E0127 13:46:20.294312 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:46:20 crc kubenswrapper[4914]: E0127 13:46:20.294467 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:46:20 crc kubenswrapper[4914]: E0127 13:46:20.294546 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:46:20 crc kubenswrapper[4914]: E0127 13:46:20.294627 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:46:22 crc kubenswrapper[4914]: I0127 13:46:22.293673 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:22 crc kubenswrapper[4914]: I0127 13:46:22.293724 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:22 crc kubenswrapper[4914]: I0127 13:46:22.293695 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:22 crc kubenswrapper[4914]: E0127 13:46:22.294621 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:46:22 crc kubenswrapper[4914]: I0127 13:46:22.294636 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:22 crc kubenswrapper[4914]: E0127 13:46:22.294682 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:46:22 crc kubenswrapper[4914]: E0127 13:46:22.294732 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:46:22 crc kubenswrapper[4914]: E0127 13:46:22.294799 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-22nld" podUID="72d4d49f-291e-448e-81eb-0895324cd4ae" Jan 27 13:46:24 crc kubenswrapper[4914]: I0127 13:46:24.294328 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:24 crc kubenswrapper[4914]: I0127 13:46:24.294334 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:24 crc kubenswrapper[4914]: I0127 13:46:24.294328 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:24 crc kubenswrapper[4914]: I0127 13:46:24.294340 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:24 crc kubenswrapper[4914]: I0127 13:46:24.297022 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 13:46:24 crc kubenswrapper[4914]: I0127 13:46:24.297196 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 13:46:24 crc kubenswrapper[4914]: I0127 13:46:24.297643 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 13:46:24 crc kubenswrapper[4914]: I0127 13:46:24.297753 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 13:46:24 crc kubenswrapper[4914]: I0127 13:46:24.297757 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 13:46:24 crc kubenswrapper[4914]: I0127 13:46:24.297885 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.397136 4914 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.427089 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2"] Jan 27 13:46:28 crc 
kubenswrapper[4914]: I0127 13:46:28.427592 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.427945 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.428507 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.430013 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.430202 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.430475 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-rd557"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.430969 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.432459 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z74jx"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.432952 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.434854 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-cmsxp"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.435289 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.437702 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 13:46:28 crc kubenswrapper[4914]: W0127 13:46:28.438756 4914 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.438789 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: E0127 13:46:28.438791 4914 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.438848 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 13:46:28 
crc kubenswrapper[4914]: I0127 13:46:28.439454 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.439666 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.439676 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.439790 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.439920 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.439999 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.440013 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.440025 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-f8vp8"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.440515 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.440891 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.441351 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.446320 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.446445 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.446509 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.446874 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.447004 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.447119 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.447227 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.447333 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.447595 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.452998 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 13:46:28 crc 
kubenswrapper[4914]: I0127 13:46:28.453054 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453080 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453129 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453006 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453289 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453365 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453442 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453480 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453501 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453557 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453606 4914 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453627 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453703 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453788 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453818 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453885 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.453919 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.454192 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.459783 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-rktkr"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.460102 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.487437 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.487610 
4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.489547 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b4l6h"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.489657 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.489990 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.490657 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b4l6h" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.491137 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.491673 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.491938 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.492647 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.492924 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.492947 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wnsrs"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.493286 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.493487 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.493526 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.493712 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.494153 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.496478 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.498499 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.498577 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.498751 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.498884 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.498903 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.498926 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.499018 4914 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.499047 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.499279 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.499392 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.499453 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.499514 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.499625 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.499685 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.499973 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.502638 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mwzf8"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.503072 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.503369 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.503649 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mwzf8" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.503992 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-62njv"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.504286 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.505180 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-f8vp8"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.505272 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.505661 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.506645 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.512909 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.514711 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-cmsxp"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.518426 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.519245 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.519891 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520381 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgtxf\" (UniqueName: 
\"kubernetes.io/projected/9899e103-466c-4fb4-887b-916ca5e7ca72-kube-api-access-lgtxf\") pod \"openshift-config-operator-7777fb866f-k9l4r\" (UID: \"9899e103-466c-4fb4-887b-916ca5e7ca72\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520437 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-client-ca\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520486 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84fb8194-82c6-414f-ab5d-4948ce0c48fb-service-ca-bundle\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520507 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6bbe5d3-1e4c-4790-9216-6cc5499a2e09-config\") pod \"machine-approver-56656f9798-rd557\" (UID: \"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520525 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/543ed275-142d-4301-a0c2-33a99233ee0d-audit-policies\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520541 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/543ed275-142d-4301-a0c2-33a99233ee0d-audit-dir\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520558 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnsvl\" (UniqueName: \"kubernetes.io/projected/0074c027-d7a9-4958-81dc-65a378eb8910-kube-api-access-mnsvl\") pod \"machine-api-operator-5694c8668f-z74jx\" (UID: \"0074c027-d7a9-4958-81dc-65a378eb8910\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520573 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/543ed275-142d-4301-a0c2-33a99233ee0d-etcd-client\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520589 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/543ed275-142d-4301-a0c2-33a99233ee0d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520606 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-477bn\" (UniqueName: 
\"kubernetes.io/projected/543ed275-142d-4301-a0c2-33a99233ee0d-kube-api-access-477bn\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520629 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bc9d257-6992-48cf-963b-42c22a5dd170-serving-cert\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520693 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520724 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-oauth-serving-cert\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520750 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84fb8194-82c6-414f-ab5d-4948ce0c48fb-serving-cert\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp" Jan 27 13:46:28 crc 
kubenswrapper[4914]: I0127 13:46:28.520775 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9899e103-466c-4fb4-887b-916ca5e7ca72-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k9l4r\" (UID: \"9899e103-466c-4fb4-887b-916ca5e7ca72\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520810 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-service-ca\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520866 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6bbe5d3-1e4c-4790-9216-6cc5499a2e09-auth-proxy-config\") pod \"machine-approver-56656f9798-rd557\" (UID: \"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520887 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90260720-9ce0-4da9-932b-34f7ce235091-console-oauth-config\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520907 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84fb8194-82c6-414f-ab5d-4948ce0c48fb-config\") pod 
\"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520932 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfhnc\" (UniqueName: \"kubernetes.io/projected/5bc9d257-6992-48cf-963b-42c22a5dd170-kube-api-access-kfhnc\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520952 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs6qf\" (UniqueName: \"kubernetes.io/projected/b6bbe5d3-1e4c-4790-9216-6cc5499a2e09-kube-api-access-rs6qf\") pod \"machine-approver-56656f9798-rd557\" (UID: \"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520973 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-console-config\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.520995 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0074c027-d7a9-4958-81dc-65a378eb8910-config\") pod \"machine-api-operator-5694c8668f-z74jx\" (UID: \"0074c027-d7a9-4958-81dc-65a378eb8910\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521017 4914 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/543ed275-142d-4301-a0c2-33a99233ee0d-encryption-config\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521039 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5fs6\" (UniqueName: \"kubernetes.io/projected/fd8b40eb-b619-4662-b1d1-056c912b7d88-kube-api-access-r5fs6\") pod \"openshift-apiserver-operator-796bbdcf4f-l2wc2\" (UID: \"fd8b40eb-b619-4662-b1d1-056c912b7d88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521060 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0074c027-d7a9-4958-81dc-65a378eb8910-images\") pod \"machine-api-operator-5694c8668f-z74jx\" (UID: \"0074c027-d7a9-4958-81dc-65a378eb8910\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521088 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84fb8194-82c6-414f-ab5d-4948ce0c48fb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521111 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/0074c027-d7a9-4958-81dc-65a378eb8910-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z74jx\" (UID: \"0074c027-d7a9-4958-81dc-65a378eb8910\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521131 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9dg\" (UniqueName: \"kubernetes.io/projected/84fb8194-82c6-414f-ab5d-4948ce0c48fb-kube-api-access-nw9dg\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521153 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90260720-9ce0-4da9-932b-34f7ce235091-console-serving-cert\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521176 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd8b40eb-b619-4662-b1d1-056c912b7d88-config\") pod \"openshift-apiserver-operator-796bbdcf4f-l2wc2\" (UID: \"fd8b40eb-b619-4662-b1d1-056c912b7d88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521195 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b6bbe5d3-1e4c-4790-9216-6cc5499a2e09-machine-approver-tls\") pod \"machine-approver-56656f9798-rd557\" (UID: \"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521216 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-trusted-ca-bundle\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521237 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/543ed275-142d-4301-a0c2-33a99233ee0d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521258 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/543ed275-142d-4301-a0c2-33a99233ee0d-serving-cert\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521283 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9899e103-466c-4fb4-887b-916ca5e7ca72-serving-cert\") pod \"openshift-config-operator-7777fb866f-k9l4r\" (UID: \"9899e103-466c-4fb4-887b-916ca5e7ca72\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521304 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fd8b40eb-b619-4662-b1d1-056c912b7d88-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-l2wc2\" (UID: \"fd8b40eb-b619-4662-b1d1-056c912b7d88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521329 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-config\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.521348 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsjfm\" (UniqueName: \"kubernetes.io/projected/90260720-9ce0-4da9-932b-34f7ce235091-kube-api-access-gsjfm\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.523000 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.524110 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z74jx"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.524659 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.525812 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.525855 4914 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.525975 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.526009 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.526102 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.526154 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.526233 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.577982 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.579973 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.581133 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4jdfm"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.581820 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.582491 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 
13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.583034 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.586486 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.587257 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.587520 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.587715 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.587792 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.587915 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.588290 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rhxcj"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.588375 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.588429 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.588556 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.588625 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.589153 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.589261 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.589347 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.589433 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.589500 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.591685 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.592650 4914 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.597270 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.597613 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6tlvd"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.597737 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.598092 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.598424 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.598618 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.598854 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.598882 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.598940 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.598965 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.599141 4914 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.599353 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.600766 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.601636 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.601742 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.602306 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.602315 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rkhw5"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.603062 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rkhw5"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.603345 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.603803 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.604110 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hsggd"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.604645 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.605622 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w75kk"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.606192 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.606605 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.606913 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.606932 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.607620 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.608289 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-49b2h"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.608815 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-49b2h"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.609105 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.609586 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.610053 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-vm26v"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.610409 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vm26v"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.610962 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.611772 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.613215 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.613457 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.621009 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cptz2"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.621177 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.621781 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/745ec1ee-15c4-456b-9e1e-9015e27c4845-serving-cert\") pod \"route-controller-manager-6576b87f9c-nv9bs\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622235 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/589608b0-5454-404a-acd7-f164145a1bc0-node-pullsecrets\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622280 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/543ed275-142d-4301-a0c2-33a99233ee0d-audit-dir\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622341 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6bbe5d3-1e4c-4790-9216-6cc5499a2e09-config\") pod \"machine-approver-56656f9798-rd557\" (UID: \"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622370 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6ff4\" (UniqueName: \"kubernetes.io/projected/745ec1ee-15c4-456b-9e1e-9015e27c4845-kube-api-access-p6ff4\") pod \"route-controller-manager-6576b87f9c-nv9bs\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622397 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/543ed275-142d-4301-a0c2-33a99233ee0d-audit-dir\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622508 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/543ed275-142d-4301-a0c2-33a99233ee0d-audit-policies\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622537 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnsvl\" (UniqueName: \"kubernetes.io/projected/0074c027-d7a9-4958-81dc-65a378eb8910-kube-api-access-mnsvl\") pod \"machine-api-operator-5694c8668f-z74jx\" (UID: \"0074c027-d7a9-4958-81dc-65a378eb8910\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622564 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/543ed275-142d-4301-a0c2-33a99233ee0d-etcd-client\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622583 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/543ed275-142d-4301-a0c2-33a99233ee0d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622605 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-477bn\" (UniqueName: \"kubernetes.io/projected/543ed275-142d-4301-a0c2-33a99233ee0d-kube-api-access-477bn\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622643 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/109be131-7cbd-4205-b5c7-eaf7790737f4-audit-dir\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622664 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622691 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-audit\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622714 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bc9d257-6992-48cf-963b-42c22a5dd170-serving-cert\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622744 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622765 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-oauth-serving-cert\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622818 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z48v6\" (UniqueName: \"kubernetes.io/projected/e7c1b522-886f-41d2-b7da-a6d4316c3b31-kube-api-access-z48v6\") pod \"machine-config-controller-84d6567774-2wzjv\" (UID: \"e7c1b522-886f-41d2-b7da-a6d4316c3b31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622946 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6bbe5d3-1e4c-4790-9216-6cc5499a2e09-config\") pod \"machine-approver-56656f9798-rd557\" (UID: \"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622948 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfxjl\" (UniqueName: \"kubernetes.io/projected/589608b0-5454-404a-acd7-f164145a1bc0-kube-api-access-zfxjl\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.622997 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623025 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623063 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745ec1ee-15c4-456b-9e1e-9015e27c4845-config\") pod \"route-controller-manager-6576b87f9c-nv9bs\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623094 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84fb8194-82c6-414f-ab5d-4948ce0c48fb-serving-cert\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623116 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9899e103-466c-4fb4-887b-916ca5e7ca72-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k9l4r\" (UID: \"9899e103-466c-4fb4-887b-916ca5e7ca72\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623133 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-service-ca\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623148 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623174 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e7c1b522-886f-41d2-b7da-a6d4316c3b31-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2wzjv\" (UID: \"e7c1b522-886f-41d2-b7da-a6d4316c3b31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623191 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623216 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg7gn\" (UniqueName: \"kubernetes.io/projected/7add3664-f0a1-4575-bc02-ff364cf808b7-kube-api-access-kg7gn\") pod \"downloads-7954f5f757-mwzf8\" (UID: \"7add3664-f0a1-4575-bc02-ff364cf808b7\") " pod="openshift-console/downloads-7954f5f757-mwzf8"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623233 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6bbe5d3-1e4c-4790-9216-6cc5499a2e09-auth-proxy-config\") pod \"machine-approver-56656f9798-rd557\" (UID: \"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623248 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90260720-9ce0-4da9-932b-34f7ce235091-console-oauth-config\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623264 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84fb8194-82c6-414f-ab5d-4948ce0c48fb-config\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623279 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/630b7825-f758-4b21-ad2a-f08f54b23dfb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vw28j\" (UID: \"630b7825-f758-4b21-ad2a-f08f54b23dfb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623295 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfhnc\" (UniqueName: \"kubernetes.io/projected/5bc9d257-6992-48cf-963b-42c22a5dd170-kube-api-access-kfhnc\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623312 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e7c1b522-886f-41d2-b7da-a6d4316c3b31-proxy-tls\") pod \"machine-config-controller-84d6567774-2wzjv\" (UID: \"e7c1b522-886f-41d2-b7da-a6d4316c3b31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623328 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5fs6\" (UniqueName: \"kubernetes.io/projected/fd8b40eb-b619-4662-b1d1-056c912b7d88-kube-api-access-r5fs6\") pod \"openshift-apiserver-operator-796bbdcf4f-l2wc2\" (UID: \"fd8b40eb-b619-4662-b1d1-056c912b7d88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623341 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs6qf\" (UniqueName: \"kubernetes.io/projected/b6bbe5d3-1e4c-4790-9216-6cc5499a2e09-kube-api-access-rs6qf\") pod \"machine-approver-56656f9798-rd557\" (UID: \"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623357 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-console-config\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623371 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0074c027-d7a9-4958-81dc-65a378eb8910-config\") pod \"machine-api-operator-5694c8668f-z74jx\" (UID: \"0074c027-d7a9-4958-81dc-65a378eb8910\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623385 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623399 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/543ed275-142d-4301-a0c2-33a99233ee0d-encryption-config\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623413 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0074c027-d7a9-4958-81dc-65a378eb8910-images\") pod \"machine-api-operator-5694c8668f-z74jx\" (UID: \"0074c027-d7a9-4958-81dc-65a378eb8910\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623438 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/589608b0-5454-404a-acd7-f164145a1bc0-audit-dir\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623455 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84fb8194-82c6-414f-ab5d-4948ce0c48fb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623472 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623489 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0074c027-d7a9-4958-81dc-65a378eb8910-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z74jx\" (UID: \"0074c027-d7a9-4958-81dc-65a378eb8910\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623505 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90260720-9ce0-4da9-932b-34f7ce235091-console-serving-cert\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623522 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw9dg\" (UniqueName: \"kubernetes.io/projected/84fb8194-82c6-414f-ab5d-4948ce0c48fb-kube-api-access-nw9dg\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623538 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623552 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/745ec1ee-15c4-456b-9e1e-9015e27c4845-client-ca\") pod \"route-controller-manager-6576b87f9c-nv9bs\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623567 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/589608b0-5454-404a-acd7-f164145a1bc0-encryption-config\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623582 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd8b40eb-b619-4662-b1d1-056c912b7d88-config\") pod \"openshift-apiserver-operator-796bbdcf4f-l2wc2\" (UID: \"fd8b40eb-b619-4662-b1d1-056c912b7d88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623597 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6btz\" (UniqueName: \"kubernetes.io/projected/109be131-7cbd-4205-b5c7-eaf7790737f4-kube-api-access-t6btz\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623611 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/589608b0-5454-404a-acd7-f164145a1bc0-etcd-client\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623625 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/543ed275-142d-4301-a0c2-33a99233ee0d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623640 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b6bbe5d3-1e4c-4790-9216-6cc5499a2e09-machine-approver-tls\") pod \"machine-approver-56656f9798-rd557\" (UID: \"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623655 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-trusted-ca-bundle\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623671 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623756 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623911 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-etcd-serving-ca\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623946 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623965 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/543ed275-142d-4301-a0c2-33a99233ee0d-serving-cert\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624001 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9899e103-466c-4fb4-887b-916ca5e7ca72-serving-cert\") pod \"openshift-config-operator-7777fb866f-k9l4r\" (UID: \"9899e103-466c-4fb4-887b-916ca5e7ca72\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624060 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd8b40eb-b619-4662-b1d1-056c912b7d88-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-l2wc2\" (UID: \"fd8b40eb-b619-4662-b1d1-056c912b7d88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624079 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb7hm\" (UniqueName: \"kubernetes.io/projected/630b7825-f758-4b21-ad2a-f08f54b23dfb-kube-api-access-gb7hm\") pod \"cluster-samples-operator-665b6dd947-vw28j\" (UID: \"630b7825-f758-4b21-ad2a-f08f54b23dfb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624094 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/589608b0-5454-404a-acd7-f164145a1bc0-serving-cert\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624111 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624129 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-config\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624144 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsjfm\" (UniqueName: \"kubernetes.io/projected/90260720-9ce0-4da9-932b-34f7ce235091-kube-api-access-gsjfm\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624160 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgtxf\" (UniqueName: \"kubernetes.io/projected/9899e103-466c-4fb4-887b-916ca5e7ca72-kube-api-access-lgtxf\") pod \"openshift-config-operator-7777fb866f-k9l4r\" (UID: \"9899e103-466c-4fb4-887b-916ca5e7ca72\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624175 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-audit-policies\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624199 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-client-ca\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624236 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-image-import-ca\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624264 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-config\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624282 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84fb8194-82c6-414f-ab5d-4948ce0c48fb-service-ca-bundle\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.624748 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84fb8194-82c6-414f-ab5d-4948ce0c48fb-service-ca-bundle\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.623728 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/543ed275-142d-4301-a0c2-33a99233ee0d-audit-policies\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.629022 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.629090 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.634727 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/543ed275-142d-4301-a0c2-33a99233ee0d-etcd-client\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.635172 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.635236 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/543ed275-142d-4301-a0c2-33a99233ee0d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.641247 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/84fb8194-82c6-414f-ab5d-4948ce0c48fb-config\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.669705 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0074c027-d7a9-4958-81dc-65a378eb8910-config\") pod \"machine-api-operator-5694c8668f-z74jx\" (UID: \"0074c027-d7a9-4958-81dc-65a378eb8910\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.669796 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd8b40eb-b619-4662-b1d1-056c912b7d88-config\") pod \"openshift-apiserver-operator-796bbdcf4f-l2wc2\" (UID: \"fd8b40eb-b619-4662-b1d1-056c912b7d88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.670396 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-console-config\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.670801 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.671061 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cptz2" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.671893 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/543ed275-142d-4301-a0c2-33a99233ee0d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.672568 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-config\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.672968 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-trusted-ca-bundle\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.673360 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b6bbe5d3-1e4c-4790-9216-6cc5499a2e09-machine-approver-tls\") pod \"machine-approver-56656f9798-rd557\" (UID: \"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.673536 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-oauth-serving-cert\") pod \"console-f9d7485db-rktkr\" (UID: 
\"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.673606 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9899e103-466c-4fb4-887b-916ca5e7ca72-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k9l4r\" (UID: \"9899e103-466c-4fb4-887b-916ca5e7ca72\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.674147 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84fb8194-82c6-414f-ab5d-4948ce0c48fb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.674672 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-client-ca\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.675444 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-service-ca\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.675715 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 
13:46:28.676118 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.676734 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.677245 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.641098 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84fb8194-82c6-414f-ab5d-4948ce0c48fb-serving-cert\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.679090 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.679362 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.680891 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0074c027-d7a9-4958-81dc-65a378eb8910-images\") pod \"machine-api-operator-5694c8668f-z74jx\" (UID: \"0074c027-d7a9-4958-81dc-65a378eb8910\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.681426 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bc9d257-6992-48cf-963b-42c22a5dd170-serving-cert\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.682781 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6bbe5d3-1e4c-4790-9216-6cc5499a2e09-auth-proxy-config\") pod \"machine-approver-56656f9798-rd557\" (UID: \"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.686881 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90260720-9ce0-4da9-932b-34f7ce235091-console-serving-cert\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.687125 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/543ed275-142d-4301-a0c2-33a99233ee0d-encryption-config\") pod \"apiserver-7bbb656c7d-2lg87\" 
(UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.687168 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8skmg"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.687220 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.688073 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd8b40eb-b619-4662-b1d1-056c912b7d88-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-l2wc2\" (UID: \"fd8b40eb-b619-4662-b1d1-056c912b7d88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.688303 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/543ed275-142d-4301-a0c2-33a99233ee0d-serving-cert\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.689043 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8skmg" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.689085 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.689520 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90260720-9ce0-4da9-932b-34f7ce235091-console-oauth-config\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.689575 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.698215 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b4l6h"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.698267 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rktkr"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.702654 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9899e103-466c-4fb4-887b-916ca5e7ca72-serving-cert\") pod \"openshift-config-operator-7777fb866f-k9l4r\" (UID: \"9899e103-466c-4fb4-887b-916ca5e7ca72\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.704850 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.706652 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/0074c027-d7a9-4958-81dc-65a378eb8910-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z74jx\" (UID: \"0074c027-d7a9-4958-81dc-65a378eb8910\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.706867 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mwzf8"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.707811 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-62njv"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.708965 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wnsrs"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.712772 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.712959 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.715893 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.718037 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w75kk"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.718372 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.719605 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw"] Jan 27 13:46:28 
crc kubenswrapper[4914]: I0127 13:46:28.720468 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.725844 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j"] Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.726562 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/745ec1ee-15c4-456b-9e1e-9015e27c4845-serving-cert\") pod \"route-controller-manager-6576b87f9c-nv9bs\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.726719 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/589608b0-5454-404a-acd7-f164145a1bc0-node-pullsecrets\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727114 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6ff4\" (UniqueName: \"kubernetes.io/projected/745ec1ee-15c4-456b-9e1e-9015e27c4845-kube-api-access-p6ff4\") pod \"route-controller-manager-6576b87f9c-nv9bs\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727260 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/109be131-7cbd-4205-b5c7-eaf7790737f4-audit-dir\") pod \"oauth-openshift-558db77b4-62njv\" (UID: 
\"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727151 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/589608b0-5454-404a-acd7-f164145a1bc0-node-pullsecrets\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727289 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/109be131-7cbd-4205-b5c7-eaf7790737f4-audit-dir\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727369 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727442 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-audit\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727475 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727498 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z48v6\" (UniqueName: \"kubernetes.io/projected/e7c1b522-886f-41d2-b7da-a6d4316c3b31-kube-api-access-z48v6\") pod \"machine-config-controller-84d6567774-2wzjv\" (UID: \"e7c1b522-886f-41d2-b7da-a6d4316c3b31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727516 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfxjl\" (UniqueName: \"kubernetes.io/projected/589608b0-5454-404a-acd7-f164145a1bc0-kube-api-access-zfxjl\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727536 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727555 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745ec1ee-15c4-456b-9e1e-9015e27c4845-config\") pod \"route-controller-manager-6576b87f9c-nv9bs\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727585 4914 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727608 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e7c1b522-886f-41d2-b7da-a6d4316c3b31-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2wzjv\" (UID: \"e7c1b522-886f-41d2-b7da-a6d4316c3b31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727636 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg7gn\" (UniqueName: \"kubernetes.io/projected/7add3664-f0a1-4575-bc02-ff364cf808b7-kube-api-access-kg7gn\") pod \"downloads-7954f5f757-mwzf8\" (UID: \"7add3664-f0a1-4575-bc02-ff364cf808b7\") " pod="openshift-console/downloads-7954f5f757-mwzf8" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727657 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727681 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/630b7825-f758-4b21-ad2a-f08f54b23dfb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vw28j\" (UID: 
\"630b7825-f758-4b21-ad2a-f08f54b23dfb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727708 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e7c1b522-886f-41d2-b7da-a6d4316c3b31-proxy-tls\") pod \"machine-config-controller-84d6567774-2wzjv\" (UID: \"e7c1b522-886f-41d2-b7da-a6d4316c3b31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727722 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727759 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/589608b0-5454-404a-acd7-f164145a1bc0-audit-dir\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727776 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727791 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/589608b0-5454-404a-acd7-f164145a1bc0-encryption-config\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727813 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727827 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/745ec1ee-15c4-456b-9e1e-9015e27c4845-client-ca\") pod \"route-controller-manager-6576b87f9c-nv9bs\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727862 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6btz\" (UniqueName: \"kubernetes.io/projected/109be131-7cbd-4205-b5c7-eaf7790737f4-kube-api-access-t6btz\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727878 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/589608b0-5454-404a-acd7-f164145a1bc0-etcd-client\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727900 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727916 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727932 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727947 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-etcd-serving-ca\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727962 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb7hm\" (UniqueName: \"kubernetes.io/projected/630b7825-f758-4b21-ad2a-f08f54b23dfb-kube-api-access-gb7hm\") pod \"cluster-samples-operator-665b6dd947-vw28j\" (UID: \"630b7825-f758-4b21-ad2a-f08f54b23dfb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727983 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/589608b0-5454-404a-acd7-f164145a1bc0-serving-cert\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.727998 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.728036 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-audit-policies\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.728067 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-image-import-ca\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.728085 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-config\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.728170 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hsggd"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.728746 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-config\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.729721 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.729784 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-audit-policies\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.730331 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-etcd-serving-ca\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.730387 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.730448 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/589608b0-5454-404a-acd7-f164145a1bc0-audit-dir\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.730972 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.731086 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.731118 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-image-import-ca\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.731871 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e7c1b522-886f-41d2-b7da-a6d4316c3b31-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2wzjv\" (UID: \"e7c1b522-886f-41d2-b7da-a6d4316c3b31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.732444 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.732739 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745ec1ee-15c4-456b-9e1e-9015e27c4845-config\") pod \"route-controller-manager-6576b87f9c-nv9bs\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.733223 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.733371 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.733504 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.733871 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/589608b0-5454-404a-acd7-f164145a1bc0-encryption-config\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.734621 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/589608b0-5454-404a-acd7-f164145a1bc0-serving-cert\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.734780 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/745ec1ee-15c4-456b-9e1e-9015e27c4845-client-ca\") pod \"route-controller-manager-6576b87f9c-nv9bs\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.734843 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/589608b0-5454-404a-acd7-f164145a1bc0-etcd-client\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.734939 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.735334 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e7c1b522-886f-41d2-b7da-a6d4316c3b31-proxy-tls\") pod \"machine-config-controller-84d6567774-2wzjv\" (UID: \"e7c1b522-886f-41d2-b7da-a6d4316c3b31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.736408 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/630b7825-f758-4b21-ad2a-f08f54b23dfb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vw28j\" (UID: \"630b7825-f758-4b21-ad2a-f08f54b23dfb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.737052 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.738553 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/745ec1ee-15c4-456b-9e1e-9015e27c4845-serving-cert\") pod \"route-controller-manager-6576b87f9c-nv9bs\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.738618 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-audit\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.740553 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cptz2"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.741031 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.742062 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rhxcj"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.742351 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.743222 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4jdfm"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.744149 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.751959 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.754952 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.756516 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6tlvd"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.757902 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-f8kzr"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.759072 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-f8kzr"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.759780 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.760494 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/589608b0-5454-404a-acd7-f164145a1bc0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.761704 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rkhw5"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.763300 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.769973 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.770507 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-fxxfj"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.771391 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fxxfj"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.775609 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.775660 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.775670 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.777510 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.780212 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.781176 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-49b2h"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.782711 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-f8kzr"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.784324 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fxxfj"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.795855 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gzlfj"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.796901 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gzlfj"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.797360 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gzlfj"]
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.829556 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.849270 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.869681 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.890445 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.909609 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.929703 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.949387 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 27 13:46:28 crc kubenswrapper[4914]: I0127 13:46:28.969515 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.016661 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.017063 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.030237 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.050216 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.070377 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.091159 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.110294 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.131021 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.150670 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.170358 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.190744 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.210482 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.231140 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.250439 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.271121 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.291676 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.310481 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.330503 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.350022 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.369233 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.390023 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.411233 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.430495 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.449851 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.469567 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.490572 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.515166 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.529700 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.550236 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.571431 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.590365 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.610468 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.628785 4914 request.go:700] Waited for 1.019661083s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&limit=500&resourceVersion=0
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.635276 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.650584 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.670624 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.690718 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.710109 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.729610 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.750009 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.770524 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.796998 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.811082 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.829868 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.850474 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.869691 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.892030 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.910141 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.929455 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.949814 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.970763 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 13:46:29 crc kubenswrapper[4914]: I0127 13:46:29.991196 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.010340 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.057737 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnsvl\" (UniqueName: \"kubernetes.io/projected/0074c027-d7a9-4958-81dc-65a378eb8910-kube-api-access-mnsvl\") pod \"machine-api-operator-5694c8668f-z74jx\" (UID: \"0074c027-d7a9-4958-81dc-65a378eb8910\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.065488 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw9dg\" (UniqueName: \"kubernetes.io/projected/84fb8194-82c6-414f-ab5d-4948ce0c48fb-kube-api-access-nw9dg\") pod \"authentication-operator-69f744f599-cmsxp\" (UID: \"84fb8194-82c6-414f-ab5d-4948ce0c48fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.087917 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.090439 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.117707 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.135003 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.150197 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.189351 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgtxf\" (UniqueName: \"kubernetes.io/projected/9899e103-466c-4fb4-887b-916ca5e7ca72-kube-api-access-lgtxf\") pod \"openshift-config-operator-7777fb866f-k9l4r\" (UID: \"9899e103-466c-4fb4-887b-916ca5e7ca72\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.191313 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.212176 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.233549 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.269878 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs6qf\" (UniqueName: \"kubernetes.io/projected/b6bbe5d3-1e4c-4790-9216-6cc5499a2e09-kube-api-access-rs6qf\") pod \"machine-approver-56656f9798-rd557\" (UID: \"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.286507 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5fs6\" (UniqueName: \"kubernetes.io/projected/fd8b40eb-b619-4662-b1d1-056c912b7d88-kube-api-access-r5fs6\") pod \"openshift-apiserver-operator-796bbdcf4f-l2wc2\" (UID: \"fd8b40eb-b619-4662-b1d1-056c912b7d88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.306811 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjfm\" (UniqueName: \"kubernetes.io/projected/90260720-9ce0-4da9-932b-34f7ce235091-kube-api-access-gsjfm\") pod \"console-f9d7485db-rktkr\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " pod="openshift-console/console-f9d7485db-rktkr"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.307502 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.311503 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.320473 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx"
Jan 27 13:46:30 crc kubenswrapper[4914]: W0127 13:46:30.322081 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6bbe5d3_1e4c_4790_9216_6cc5499a2e09.slice/crio-ecb1da6df6a6f9d75716ccd7f340d4ac7177ad2371d6a9a67b0ef085fe6f72ad WatchSource:0}: Error finding container ecb1da6df6a6f9d75716ccd7f340d4ac7177ad2371d6a9a67b0ef085fe6f72ad: Status 404 returned error can't find the container with id ecb1da6df6a6f9d75716ccd7f340d4ac7177ad2371d6a9a67b0ef085fe6f72ad
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.331752 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.350095 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.365312 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-cmsxp"]
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.371176 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.395695 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.415991 4914 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.428550 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfhnc\" (UniqueName: \"kubernetes.io/projected/5bc9d257-6992-48cf-963b-42c22a5dd170-kube-api-access-kfhnc\") pod \"controller-manager-879f6c89f-f8vp8\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.430482 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.454353 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.470252 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.478866 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z74jx"] Jan 27 13:46:30 crc kubenswrapper[4914]: W0127 13:46:30.487932 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0074c027_d7a9_4958_81dc_65a378eb8910.slice/crio-62da9fc92e61b94423fa0f5d2c5df37e8a1799bec84f4b43c27398e7a9a8120d WatchSource:0}: Error finding container 62da9fc92e61b94423fa0f5d2c5df37e8a1799bec84f4b43c27398e7a9a8120d: Status 404 returned error can't find the container with id 62da9fc92e61b94423fa0f5d2c5df37e8a1799bec84f4b43c27398e7a9a8120d Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.494544 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.521525 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6ff4\" (UniqueName: \"kubernetes.io/projected/745ec1ee-15c4-456b-9e1e-9015e27c4845-kube-api-access-p6ff4\") pod \"route-controller-manager-6576b87f9c-nv9bs\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.534934 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.550601 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.567906 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg7gn\" (UniqueName: \"kubernetes.io/projected/7add3664-f0a1-4575-bc02-ff364cf808b7-kube-api-access-kg7gn\") pod \"downloads-7954f5f757-mwzf8\" (UID: \"7add3664-f0a1-4575-bc02-ff364cf808b7\") " pod="openshift-console/downloads-7954f5f757-mwzf8" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.569467 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfxjl\" (UniqueName: \"kubernetes.io/projected/589608b0-5454-404a-acd7-f164145a1bc0-kube-api-access-zfxjl\") pod \"apiserver-76f77b778f-4jdfm\" (UID: \"589608b0-5454-404a-acd7-f164145a1bc0\") " pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.585535 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb7hm\" (UniqueName: 
\"kubernetes.io/projected/630b7825-f758-4b21-ad2a-f08f54b23dfb-kube-api-access-gb7hm\") pod \"cluster-samples-operator-665b6dd947-vw28j\" (UID: \"630b7825-f758-4b21-ad2a-f08f54b23dfb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.591958 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.595497 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z48v6\" (UniqueName: \"kubernetes.io/projected/e7c1b522-886f-41d2-b7da-a6d4316c3b31-kube-api-access-z48v6\") pod \"machine-config-controller-84d6567774-2wzjv\" (UID: \"e7c1b522-886f-41d2-b7da-a6d4316c3b31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.606628 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.609403 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6btz\" (UniqueName: \"kubernetes.io/projected/109be131-7cbd-4205-b5c7-eaf7790737f4-kube-api-access-t6btz\") pod \"oauth-openshift-558db77b4-62njv\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.611378 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.631124 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.648381 4914 request.go:700] Waited for 1.889055336s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.652101 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.669790 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r"] Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.672374 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 13:46:30 crc kubenswrapper[4914]: W0127 13:46:30.691528 4914 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9899e103_466c_4fb4_887b_916ca5e7ca72.slice/crio-066eddb0e706db00e9ef1a49107035c632ace4ac82e4c0154be6e0b8485f804a WatchSource:0}: Error finding container 066eddb0e706db00e9ef1a49107035c632ace4ac82e4c0154be6e0b8485f804a: Status 404 returned error can't find the container with id 066eddb0e706db00e9ef1a49107035c632ace4ac82e4c0154be6e0b8485f804a Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.692311 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.699209 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.710743 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.729949 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.734228 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rktkr"] Jan 27 13:46:30 crc kubenswrapper[4914]: W0127 13:46:30.770999 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90260720_9ce0_4da9_932b_34f7ce235091.slice/crio-6d3d95507d8f40d5398fc9321dd2724a7c3f834e8abe92e4b0a48ee6d25bb3d3 WatchSource:0}: Error finding container 6d3d95507d8f40d5398fc9321dd2724a7c3f834e8abe92e4b0a48ee6d25bb3d3: Status 404 returned error can't find the container with id 6d3d95507d8f40d5398fc9321dd2724a7c3f834e8abe92e4b0a48ee6d25bb3d3 Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.777154 4914 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.789922 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.810428 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.844477 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rktkr" event={"ID":"90260720-9ce0-4da9-932b-34f7ce235091","Type":"ContainerStarted","Data":"6d3d95507d8f40d5398fc9321dd2724a7c3f834e8abe92e4b0a48ee6d25bb3d3"} Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.847720 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557" event={"ID":"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09","Type":"ContainerStarted","Data":"b6d1fe33f43cc5e8d6438b0b62c040f602095daf759bbd61c2ed7cabbe1224e9"} Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.847775 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557" event={"ID":"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09","Type":"ContainerStarted","Data":"ecb1da6df6a6f9d75716ccd7f340d4ac7177ad2371d6a9a67b0ef085fe6f72ad"} Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.848766 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.849070 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4jdfm"] Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.849213 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.850550 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp" event={"ID":"84fb8194-82c6-414f-ab5d-4948ce0c48fb","Type":"ContainerStarted","Data":"e27662e0aba6d207b01b2da10ae55723377deb63f5c9d19aa985c64919724e52"} Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.850575 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp" event={"ID":"84fb8194-82c6-414f-ab5d-4948ce0c48fb","Type":"ContainerStarted","Data":"ade51ec8c1d9844e54fb3a461d93d4585696f42da4a52706105634fb32104f06"} Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.852541 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" event={"ID":"9899e103-466c-4fb4-887b-916ca5e7ca72","Type":"ContainerStarted","Data":"066eddb0e706db00e9ef1a49107035c632ace4ac82e4c0154be6e0b8485f804a"} Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.855298 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx" event={"ID":"0074c027-d7a9-4958-81dc-65a378eb8910","Type":"ContainerStarted","Data":"05a3c0654fa07e1729a1e47b8327516b2df5046047898ba3b6dfdbcb1e4d14d2"} Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.855318 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx" event={"ID":"0074c027-d7a9-4958-81dc-65a378eb8910","Type":"ContainerStarted","Data":"62da9fc92e61b94423fa0f5d2c5df37e8a1799bec84f4b43c27398e7a9a8120d"} Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.857031 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwjt4\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-kube-api-access-zwjt4\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.857088 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/411381d4-3ea7-4578-b962-f2629f8ba142-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rdxmp\" (UID: \"411381d4-3ea7-4578-b962-f2629f8ba142\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.858069 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f09ceb8-63c3-4f6a-9775-08ca7702b559-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcxd8\" (UID: \"5f09ceb8-63c3-4f6a-9775-08ca7702b559\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.858131 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f09ceb8-63c3-4f6a-9775-08ca7702b559-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcxd8\" (UID: \"5f09ceb8-63c3-4f6a-9775-08ca7702b559\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.858147 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d298bcb-b3cc-4a2d-a963-6930301982e0-metrics-tls\") pod \"dns-operator-744455d44c-b4l6h\" (UID: \"6d298bcb-b3cc-4a2d-a963-6930301982e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-b4l6h" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.858185 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-bound-sa-token\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.858250 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.858272 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2735a1a5-8ace-4c82-94ce-2d81e310612e-trusted-ca\") pod \"console-operator-58897d9998-wnsrs\" (UID: \"2735a1a5-8ace-4c82-94ce-2d81e310612e\") " pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.858332 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-registry-tls\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.859075 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztf9j\" (UniqueName: \"kubernetes.io/projected/6d298bcb-b3cc-4a2d-a963-6930301982e0-kube-api-access-ztf9j\") pod \"dns-operator-744455d44c-b4l6h\" (UID: \"6d298bcb-b3cc-4a2d-a963-6930301982e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-b4l6h" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.859386 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df3e61ea-86a0-416e-9e24-d90241f6a543-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.859425 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2735a1a5-8ace-4c82-94ce-2d81e310612e-config\") pod \"console-operator-58897d9998-wnsrs\" (UID: \"2735a1a5-8ace-4c82-94ce-2d81e310612e\") " pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.859447 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrc6t\" (UniqueName: \"kubernetes.io/projected/5f09ceb8-63c3-4f6a-9775-08ca7702b559-kube-api-access-lrc6t\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcxd8\" (UID: \"5f09ceb8-63c3-4f6a-9775-08ca7702b559\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.859612 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/411381d4-3ea7-4578-b962-f2629f8ba142-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rdxmp\" (UID: \"411381d4-3ea7-4578-b962-f2629f8ba142\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" Jan 27 13:46:30 crc kubenswrapper[4914]: E0127 13:46:30.859699 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:31.359687278 +0000 UTC m=+149.672037363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.859808 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df3e61ea-86a0-416e-9e24-d90241f6a543-registry-certificates\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.859844 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-9jxhh\" (UniqueName: \"kubernetes.io/projected/411381d4-3ea7-4578-b962-f2629f8ba142-kube-api-access-9jxhh\") pod \"cluster-image-registry-operator-dc59b4c8b-rdxmp\" (UID: \"411381d4-3ea7-4578-b962-f2629f8ba142\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.859905 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df3e61ea-86a0-416e-9e24-d90241f6a543-trusted-ca\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.859920 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2735a1a5-8ace-4c82-94ce-2d81e310612e-serving-cert\") pod \"console-operator-58897d9998-wnsrs\" (UID: \"2735a1a5-8ace-4c82-94ce-2d81e310612e\") " pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.859963 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df3e61ea-86a0-416e-9e24-d90241f6a543-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.859980 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/411381d4-3ea7-4578-b962-f2629f8ba142-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rdxmp\" (UID: \"411381d4-3ea7-4578-b962-f2629f8ba142\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.860061 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc8h6\" (UniqueName: \"kubernetes.io/projected/2735a1a5-8ace-4c82-94ce-2d81e310612e-kube-api-access-cc8h6\") pod \"console-operator-58897d9998-wnsrs\" (UID: \"2735a1a5-8ace-4c82-94ce-2d81e310612e\") " pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.860222 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv"] Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.861177 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-477bn\" (UniqueName: \"kubernetes.io/projected/543ed275-142d-4301-a0c2-33a99233ee0d-kube-api-access-477bn\") pod \"apiserver-7bbb656c7d-2lg87\" (UID: \"543ed275-142d-4301-a0c2-33a99233ee0d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.864995 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mwzf8" Jan 27 13:46:30 crc kubenswrapper[4914]: W0127 13:46:30.876151 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod589608b0_5454_404a_acd7_f164145a1bc0.slice/crio-abe8dce7cd6b60ea963e6deecaf835a010c80c95ae665789992a78782a88ef2c WatchSource:0}: Error finding container abe8dce7cd6b60ea963e6deecaf835a010c80c95ae665789992a78782a88ef2c: Status 404 returned error can't find the container with id abe8dce7cd6b60ea963e6deecaf835a010c80c95ae665789992a78782a88ef2c Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.882925 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.893268 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.923189 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-f8vp8"] Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962203 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962619 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2735a1a5-8ace-4c82-94ce-2d81e310612e-trusted-ca\") pod \"console-operator-58897d9998-wnsrs\" (UID: \"2735a1a5-8ace-4c82-94ce-2d81e310612e\") " pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962641 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b6ea188-6622-4aaa-b9a1-209ff123514f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jnmjw\" (UID: \"8b6ea188-6622-4aaa-b9a1-209ff123514f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962668 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b8b420d-d475-4d19-84b8-113facbfcf09-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-9mj7d\" (UID: \"9b8b420d-d475-4d19-84b8-113facbfcf09\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962683 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c2ae126b-c993-4138-b9a6-dc7a99e9f69e-certs\") pod \"machine-config-server-8skmg\" (UID: \"c2ae126b-c993-4138-b9a6-dc7a99e9f69e\") " pod="openshift-machine-config-operator/machine-config-server-8skmg" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962699 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-registry-tls\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962732 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2118bb9-5519-4158-9acc-e9e434d6da7d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mbz96\" (UID: \"d2118bb9-5519-4158-9acc-e9e434d6da7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962758 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7656576b-aeae-4b15-b2ab-18658770a1e5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w75kk\" (UID: \"7656576b-aeae-4b15-b2ab-18658770a1e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" Jan 27 13:46:30 crc 
kubenswrapper[4914]: I0127 13:46:30.962775 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02a14aa-070b-42e4-a44b-bb6bd50e03b1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m9vdw\" (UID: \"a02a14aa-070b-42e4-a44b-bb6bd50e03b1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962815 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6295fb40-33f8-4b77-8c3e-d36037efa07e-stats-auth\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962872 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b8b420d-d475-4d19-84b8-113facbfcf09-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9mj7d\" (UID: \"9b8b420d-d475-4d19-84b8-113facbfcf09\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962890 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/acfa8e86-e574-4d67-91f7-35a45e1956ee-etcd-service-ca\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962913 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a1a517ef-fa49-4652-9039-30ef3b824353-metrics-tls\") pod \"ingress-operator-5b745b69d9-jsk4m\" (UID: \"a1a517ef-fa49-4652-9039-30ef3b824353\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962939 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2735a1a5-8ace-4c82-94ce-2d81e310612e-config\") pod \"console-operator-58897d9998-wnsrs\" (UID: \"2735a1a5-8ace-4c82-94ce-2d81e310612e\") " pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962953 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a517ef-fa49-4652-9039-30ef3b824353-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jsk4m\" (UID: \"a1a517ef-fa49-4652-9039-30ef3b824353\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962980 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrc6t\" (UniqueName: \"kubernetes.io/projected/5f09ceb8-63c3-4f6a-9775-08ca7702b559-kube-api-access-lrc6t\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcxd8\" (UID: \"5f09ceb8-63c3-4f6a-9775-08ca7702b559\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.962996 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm2rb\" (UniqueName: \"kubernetes.io/projected/2ab4b4e4-547c-4d23-bdb4-fc7f22902419-kube-api-access-zm2rb\") pod \"service-ca-9c57cc56f-49b2h\" (UID: \"2ab4b4e4-547c-4d23-bdb4-fc7f22902419\") " pod="openshift-service-ca/service-ca-9c57cc56f-49b2h" Jan 27 13:46:30 
crc kubenswrapper[4914]: I0127 13:46:30.963011 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acfa8e86-e574-4d67-91f7-35a45e1956ee-config\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963028 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3e4b037-3ec1-4d87-9434-1717d360ab61-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7dd65\" (UID: \"b3e4b037-3ec1-4d87-9434-1717d360ab61\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963052 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-config-volume\") pod \"collect-profiles-29492025-hbn26\" (UID: \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963076 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jxhh\" (UniqueName: \"kubernetes.io/projected/411381d4-3ea7-4578-b962-f2629f8ba142-kube-api-access-9jxhh\") pod \"cluster-image-registry-operator-dc59b4c8b-rdxmp\" (UID: \"411381d4-3ea7-4578-b962-f2629f8ba142\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963093 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96dfm\" (UniqueName: 
\"kubernetes.io/projected/9b8b420d-d475-4d19-84b8-113facbfcf09-kube-api-access-96dfm\") pod \"kube-storage-version-migrator-operator-b67b599dd-9mj7d\" (UID: \"9b8b420d-d475-4d19-84b8-113facbfcf09\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963129 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df3e61ea-86a0-416e-9e24-d90241f6a543-trusted-ca\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963146 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmbh2\" (UniqueName: \"kubernetes.io/projected/b3e4b037-3ec1-4d87-9434-1717d360ab61-kube-api-access-kmbh2\") pod \"package-server-manager-789f6589d5-7dd65\" (UID: \"b3e4b037-3ec1-4d87-9434-1717d360ab61\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963173 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-socket-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963189 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h87q6\" (UniqueName: \"kubernetes.io/projected/42b6b062-22d8-4a87-a0d1-8c24ec3e9637-kube-api-access-h87q6\") pod \"dns-default-fxxfj\" (UID: \"42b6b062-22d8-4a87-a0d1-8c24ec3e9637\") " pod="openshift-dns/dns-default-fxxfj" Jan 27 
13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963202 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4-cert\") pod \"ingress-canary-f8kzr\" (UID: \"b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4\") " pod="openshift-ingress-canary/ingress-canary-f8kzr" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963217 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558eb6ca-2441-4818-8d65-5323c39328c2-config\") pod \"service-ca-operator-777779d784-hsggd\" (UID: \"558eb6ca-2441-4818-8d65-5323c39328c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963248 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr2dg\" (UniqueName: \"kubernetes.io/projected/8c4742c3-5115-49f4-85ed-a3eda8373114-kube-api-access-mr2dg\") pod \"migrator-59844c95c7-cptz2\" (UID: \"8c4742c3-5115-49f4-85ed-a3eda8373114\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cptz2" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963282 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdwjd\" (UniqueName: \"kubernetes.io/projected/558eb6ca-2441-4818-8d65-5323c39328c2-kube-api-access-hdwjd\") pod \"service-ca-operator-777779d784-hsggd\" (UID: \"558eb6ca-2441-4818-8d65-5323c39328c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963307 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwjt4\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-kube-api-access-zwjt4\") pod 
\"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963321 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b6ea188-6622-4aaa-b9a1-209ff123514f-srv-cert\") pod \"olm-operator-6b444d44fb-jnmjw\" (UID: \"8b6ea188-6622-4aaa-b9a1-209ff123514f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963339 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qm7f\" (UniqueName: \"kubernetes.io/projected/cb849428-e713-45bf-b3d4-ddd350825372-kube-api-access-8qm7f\") pod \"multus-admission-controller-857f4d67dd-rkhw5\" (UID: \"cb849428-e713-45bf-b3d4-ddd350825372\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rkhw5" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963363 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klc67\" (UniqueName: \"kubernetes.io/projected/08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6-kube-api-access-klc67\") pod \"catalog-operator-68c6474976-ljqd4\" (UID: \"08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963395 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c2ae126b-c993-4138-b9a6-dc7a99e9f69e-node-bootstrap-token\") pod \"machine-config-server-8skmg\" (UID: \"c2ae126b-c993-4138-b9a6-dc7a99e9f69e\") " pod="openshift-machine-config-operator/machine-config-server-8skmg" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963410 4914 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-csi-data-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963424 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5n67\" (UniqueName: \"kubernetes.io/projected/628bfff0-2254-4c7c-a8a4-01b2288d8535-kube-api-access-c5n67\") pod \"control-plane-machine-set-operator-78cbb6b69f-mlnd8\" (UID: \"628bfff0-2254-4c7c-a8a4-01b2288d8535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963448 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02a14aa-070b-42e4-a44b-bb6bd50e03b1-config\") pod \"kube-controller-manager-operator-78b949d7b-m9vdw\" (UID: \"a02a14aa-070b-42e4-a44b-bb6bd50e03b1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963472 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2ab4b4e4-547c-4d23-bdb4-fc7f22902419-signing-cabundle\") pod \"service-ca-9c57cc56f-49b2h\" (UID: \"2ab4b4e4-547c-4d23-bdb4-fc7f22902419\") " pod="openshift-service-ca/service-ca-9c57cc56f-49b2h" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963485 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-ljqd4\" (UID: \"08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963498 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/acfa8e86-e574-4d67-91f7-35a45e1956ee-etcd-ca\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963538 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6038ea14-7eab-4587-942f-acba6a8a100a-proxy-tls\") pod \"machine-config-operator-74547568cd-8tqf2\" (UID: \"6038ea14-7eab-4587-942f-acba6a8a100a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963554 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acfa8e86-e574-4d67-91f7-35a45e1956ee-serving-cert\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963568 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfl9x\" (UniqueName: \"kubernetes.io/projected/7656576b-aeae-4b15-b2ab-18658770a1e5-kube-api-access-dfl9x\") pod \"marketplace-operator-79b997595-w75kk\" (UID: \"7656576b-aeae-4b15-b2ab-18658770a1e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963603 4914 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-bound-sa-token\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963617 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-mountpoint-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963641 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/558eb6ca-2441-4818-8d65-5323c39328c2-serving-cert\") pod \"service-ca-operator-777779d784-hsggd\" (UID: \"558eb6ca-2441-4818-8d65-5323c39328c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963655 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bfg8\" (UniqueName: \"kubernetes.io/projected/c2ae126b-c993-4138-b9a6-dc7a99e9f69e-kube-api-access-9bfg8\") pod \"machine-config-server-8skmg\" (UID: \"c2ae126b-c993-4138-b9a6-dc7a99e9f69e\") " pod="openshift-machine-config-operator/machine-config-server-8skmg" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963688 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2ab4b4e4-547c-4d23-bdb4-fc7f22902419-signing-key\") pod \"service-ca-9c57cc56f-49b2h\" (UID: \"2ab4b4e4-547c-4d23-bdb4-fc7f22902419\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-49b2h" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963705 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97635386-64c1-4f78-88f2-17faa845b6b2-webhook-cert\") pod \"packageserver-d55dfcdfc-ssxkp\" (UID: \"97635386-64c1-4f78-88f2-17faa845b6b2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963739 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02a14aa-070b-42e4-a44b-bb6bd50e03b1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m9vdw\" (UID: \"a02a14aa-070b-42e4-a44b-bb6bd50e03b1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963758 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-registration-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963784 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6-srv-cert\") pod \"catalog-operator-68c6474976-ljqd4\" (UID: \"08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963802 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztf9j\" (UniqueName: 
\"kubernetes.io/projected/6d298bcb-b3cc-4a2d-a963-6930301982e0-kube-api-access-ztf9j\") pod \"dns-operator-744455d44c-b4l6h\" (UID: \"6d298bcb-b3cc-4a2d-a963-6930301982e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-b4l6h" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963824 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6295fb40-33f8-4b77-8c3e-d36037efa07e-default-certificate\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963854 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6295fb40-33f8-4b77-8c3e-d36037efa07e-service-ca-bundle\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963868 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7656576b-aeae-4b15-b2ab-18658770a1e5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w75kk\" (UID: \"7656576b-aeae-4b15-b2ab-18658770a1e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963895 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df3e61ea-86a0-416e-9e24-d90241f6a543-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc 
kubenswrapper[4914]: I0127 13:46:30.963911 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2106d7e3-6e73-4305-aa0b-e96af20f2a18-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k2jgl\" (UID: \"2106d7e3-6e73-4305-aa0b-e96af20f2a18\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963927 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2106d7e3-6e73-4305-aa0b-e96af20f2a18-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k2jgl\" (UID: \"2106d7e3-6e73-4305-aa0b-e96af20f2a18\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963942 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nztw\" (UniqueName: \"kubernetes.io/projected/acfa8e86-e574-4d67-91f7-35a45e1956ee-kube-api-access-4nztw\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963955 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1a517ef-fa49-4652-9039-30ef3b824353-trusted-ca\") pod \"ingress-operator-5b745b69d9-jsk4m\" (UID: \"a1a517ef-fa49-4652-9039-30ef3b824353\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963971 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-secret-volume\") pod \"collect-profiles-29492025-hbn26\" (UID: \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.963986 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97635386-64c1-4f78-88f2-17faa845b6b2-apiservice-cert\") pod \"packageserver-d55dfcdfc-ssxkp\" (UID: \"97635386-64c1-4f78-88f2-17faa845b6b2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964027 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6krk\" (UniqueName: \"kubernetes.io/projected/a1a517ef-fa49-4652-9039-30ef3b824353-kube-api-access-k6krk\") pod \"ingress-operator-5b745b69d9-jsk4m\" (UID: \"a1a517ef-fa49-4652-9039-30ef3b824353\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964042 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2118bb9-5519-4158-9acc-e9e434d6da7d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mbz96\" (UID: \"d2118bb9-5519-4158-9acc-e9e434d6da7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964056 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwjj5\" (UniqueName: \"kubernetes.io/projected/6038ea14-7eab-4587-942f-acba6a8a100a-kube-api-access-kwjj5\") pod \"machine-config-operator-74547568cd-8tqf2\" (UID: \"6038ea14-7eab-4587-942f-acba6a8a100a\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964069 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-plugins-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964087 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/411381d4-3ea7-4578-b962-f2629f8ba142-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rdxmp\" (UID: \"411381d4-3ea7-4578-b962-f2629f8ba142\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964103 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9br86\" (UniqueName: \"kubernetes.io/projected/8b6ea188-6622-4aaa-b9a1-209ff123514f-kube-api-access-9br86\") pod \"olm-operator-6b444d44fb-jnmjw\" (UID: \"8b6ea188-6622-4aaa-b9a1-209ff123514f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964121 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2106d7e3-6e73-4305-aa0b-e96af20f2a18-config\") pod \"kube-apiserver-operator-766d6c64bb-k2jgl\" (UID: \"2106d7e3-6e73-4305-aa0b-e96af20f2a18\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964141 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df3e61ea-86a0-416e-9e24-d90241f6a543-registry-certificates\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964185 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42b6b062-22d8-4a87-a0d1-8c24ec3e9637-config-volume\") pod \"dns-default-fxxfj\" (UID: \"42b6b062-22d8-4a87-a0d1-8c24ec3e9637\") " pod="openshift-dns/dns-default-fxxfj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964229 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2735a1a5-8ace-4c82-94ce-2d81e310612e-serving-cert\") pod \"console-operator-58897d9998-wnsrs\" (UID: \"2735a1a5-8ace-4c82-94ce-2d81e310612e\") " pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964251 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df3e61ea-86a0-416e-9e24-d90241f6a543-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964271 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/411381d4-3ea7-4578-b962-f2629f8ba142-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rdxmp\" (UID: \"411381d4-3ea7-4578-b962-f2629f8ba142\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 
13:46:30.964292 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddj7j\" (UniqueName: \"kubernetes.io/projected/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-kube-api-access-ddj7j\") pod \"collect-profiles-29492025-hbn26\" (UID: \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964342 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5z2k\" (UniqueName: \"kubernetes.io/projected/97635386-64c1-4f78-88f2-17faa845b6b2-kube-api-access-b5z2k\") pod \"packageserver-d55dfcdfc-ssxkp\" (UID: \"97635386-64c1-4f78-88f2-17faa845b6b2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964357 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/acfa8e86-e574-4d67-91f7-35a45e1956ee-etcd-client\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964382 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb797\" (UniqueName: \"kubernetes.io/projected/6295fb40-33f8-4b77-8c3e-d36037efa07e-kube-api-access-cb797\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964405 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/628bfff0-2254-4c7c-a8a4-01b2288d8535-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mlnd8\" (UID: \"628bfff0-2254-4c7c-a8a4-01b2288d8535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964427 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc8h6\" (UniqueName: \"kubernetes.io/projected/2735a1a5-8ace-4c82-94ce-2d81e310612e-kube-api-access-cc8h6\") pod \"console-operator-58897d9998-wnsrs\" (UID: \"2735a1a5-8ace-4c82-94ce-2d81e310612e\") " pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964448 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/411381d4-3ea7-4578-b962-f2629f8ba142-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rdxmp\" (UID: \"411381d4-3ea7-4578-b962-f2629f8ba142\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964468 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6295fb40-33f8-4b77-8c3e-d36037efa07e-metrics-certs\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964486 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2118bb9-5519-4158-9acc-e9e434d6da7d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mbz96\" (UID: \"d2118bb9-5519-4158-9acc-e9e434d6da7d\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964505 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84wvc\" (UniqueName: \"kubernetes.io/projected/b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4-kube-api-access-84wvc\") pod \"ingress-canary-f8kzr\" (UID: \"b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4\") " pod="openshift-ingress-canary/ingress-canary-f8kzr" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964526 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6038ea14-7eab-4587-942f-acba6a8a100a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8tqf2\" (UID: \"6038ea14-7eab-4587-942f-acba6a8a100a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964548 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f09ceb8-63c3-4f6a-9775-08ca7702b559-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcxd8\" (UID: \"5f09ceb8-63c3-4f6a-9775-08ca7702b559\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964571 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cb849428-e713-45bf-b3d4-ddd350825372-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rkhw5\" (UID: \"cb849428-e713-45bf-b3d4-ddd350825372\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rkhw5" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964592 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97635386-64c1-4f78-88f2-17faa845b6b2-tmpfs\") pod \"packageserver-d55dfcdfc-ssxkp\" (UID: \"97635386-64c1-4f78-88f2-17faa845b6b2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964620 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ncjj\" (UniqueName: \"kubernetes.io/projected/1e9e06d9-6682-4f7f-a4a0-36414213490b-kube-api-access-8ncjj\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964645 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f09ceb8-63c3-4f6a-9775-08ca7702b559-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcxd8\" (UID: \"5f09ceb8-63c3-4f6a-9775-08ca7702b559\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964660 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d298bcb-b3cc-4a2d-a963-6930301982e0-metrics-tls\") pod \"dns-operator-744455d44c-b4l6h\" (UID: \"6d298bcb-b3cc-4a2d-a963-6930301982e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-b4l6h" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.964677 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/42b6b062-22d8-4a87-a0d1-8c24ec3e9637-metrics-tls\") pod \"dns-default-fxxfj\" (UID: \"42b6b062-22d8-4a87-a0d1-8c24ec3e9637\") " pod="openshift-dns/dns-default-fxxfj" Jan 27 13:46:30 crc 
kubenswrapper[4914]: I0127 13:46:30.964694 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6038ea14-7eab-4587-942f-acba6a8a100a-images\") pod \"machine-config-operator-74547568cd-8tqf2\" (UID: \"6038ea14-7eab-4587-942f-acba6a8a100a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" Jan 27 13:46:30 crc kubenswrapper[4914]: E0127 13:46:30.964819 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:31.464804378 +0000 UTC m=+149.777154463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.974059 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df3e61ea-86a0-416e-9e24-d90241f6a543-registry-certificates\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.974771 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2735a1a5-8ace-4c82-94ce-2d81e310612e-trusted-ca\") pod \"console-operator-58897d9998-wnsrs\" (UID: 
\"2735a1a5-8ace-4c82-94ce-2d81e310612e\") " pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.974913 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2735a1a5-8ace-4c82-94ce-2d81e310612e-config\") pod \"console-operator-58897d9998-wnsrs\" (UID: \"2735a1a5-8ace-4c82-94ce-2d81e310612e\") " pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.988939 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d298bcb-b3cc-4a2d-a963-6930301982e0-metrics-tls\") pod \"dns-operator-744455d44c-b4l6h\" (UID: \"6d298bcb-b3cc-4a2d-a963-6930301982e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-b4l6h" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.990717 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f09ceb8-63c3-4f6a-9775-08ca7702b559-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcxd8\" (UID: \"5f09ceb8-63c3-4f6a-9775-08ca7702b559\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.993714 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2735a1a5-8ace-4c82-94ce-2d81e310612e-serving-cert\") pod \"console-operator-58897d9998-wnsrs\" (UID: \"2735a1a5-8ace-4c82-94ce-2d81e310612e\") " pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.995991 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df3e61ea-86a0-416e-9e24-d90241f6a543-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:30 crc kubenswrapper[4914]: I0127 13:46:30.998686 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df3e61ea-86a0-416e-9e24-d90241f6a543-trusted-ca\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.001203 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f09ceb8-63c3-4f6a-9775-08ca7702b559-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcxd8\" (UID: \"5f09ceb8-63c3-4f6a-9775-08ca7702b559\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.002824 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df3e61ea-86a0-416e-9e24-d90241f6a543-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.004764 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/411381d4-3ea7-4578-b962-f2629f8ba142-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rdxmp\" (UID: \"411381d4-3ea7-4578-b962-f2629f8ba142\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.009568 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2"] Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.009674 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/411381d4-3ea7-4578-b962-f2629f8ba142-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rdxmp\" (UID: \"411381d4-3ea7-4578-b962-f2629f8ba142\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.012278 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-registry-tls\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.014572 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrc6t\" (UniqueName: \"kubernetes.io/projected/5f09ceb8-63c3-4f6a-9775-08ca7702b559-kube-api-access-lrc6t\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcxd8\" (UID: \"5f09ceb8-63c3-4f6a-9775-08ca7702b559\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.014706 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"] Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.027606 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jxhh\" (UniqueName: \"kubernetes.io/projected/411381d4-3ea7-4578-b962-f2629f8ba142-kube-api-access-9jxhh\") pod \"cluster-image-registry-operator-dc59b4c8b-rdxmp\" (UID: \"411381d4-3ea7-4578-b962-f2629f8ba142\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.055371 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-bound-sa-token\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066414 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2118bb9-5519-4158-9acc-e9e434d6da7d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mbz96\" (UID: \"d2118bb9-5519-4158-9acc-e9e434d6da7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066447 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84wvc\" (UniqueName: \"kubernetes.io/projected/b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4-kube-api-access-84wvc\") pod \"ingress-canary-f8kzr\" (UID: \"b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4\") " pod="openshift-ingress-canary/ingress-canary-f8kzr" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066467 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6295fb40-33f8-4b77-8c3e-d36037efa07e-metrics-certs\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066488 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/6038ea14-7eab-4587-942f-acba6a8a100a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8tqf2\" (UID: \"6038ea14-7eab-4587-942f-acba6a8a100a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066511 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cb849428-e713-45bf-b3d4-ddd350825372-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rkhw5\" (UID: \"cb849428-e713-45bf-b3d4-ddd350825372\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rkhw5" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066534 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97635386-64c1-4f78-88f2-17faa845b6b2-tmpfs\") pod \"packageserver-d55dfcdfc-ssxkp\" (UID: \"97635386-64c1-4f78-88f2-17faa845b6b2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066553 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ncjj\" (UniqueName: \"kubernetes.io/projected/1e9e06d9-6682-4f7f-a4a0-36414213490b-kube-api-access-8ncjj\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066567 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/42b6b062-22d8-4a87-a0d1-8c24ec3e9637-metrics-tls\") pod \"dns-default-fxxfj\" (UID: \"42b6b062-22d8-4a87-a0d1-8c24ec3e9637\") " pod="openshift-dns/dns-default-fxxfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066582 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/6038ea14-7eab-4587-942f-acba6a8a100a-images\") pod \"machine-config-operator-74547568cd-8tqf2\" (UID: \"6038ea14-7eab-4587-942f-acba6a8a100a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066598 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b6ea188-6622-4aaa-b9a1-209ff123514f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jnmjw\" (UID: \"8b6ea188-6622-4aaa-b9a1-209ff123514f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066616 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066633 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b8b420d-d475-4d19-84b8-113facbfcf09-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9mj7d\" (UID: \"9b8b420d-d475-4d19-84b8-113facbfcf09\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066648 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c2ae126b-c993-4138-b9a6-dc7a99e9f69e-certs\") pod \"machine-config-server-8skmg\" (UID: \"c2ae126b-c993-4138-b9a6-dc7a99e9f69e\") " 
pod="openshift-machine-config-operator/machine-config-server-8skmg" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066663 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7656576b-aeae-4b15-b2ab-18658770a1e5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w75kk\" (UID: \"7656576b-aeae-4b15-b2ab-18658770a1e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066680 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02a14aa-070b-42e4-a44b-bb6bd50e03b1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m9vdw\" (UID: \"a02a14aa-070b-42e4-a44b-bb6bd50e03b1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066705 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2118bb9-5519-4158-9acc-e9e434d6da7d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mbz96\" (UID: \"d2118bb9-5519-4158-9acc-e9e434d6da7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066727 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6295fb40-33f8-4b77-8c3e-d36037efa07e-stats-auth\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066742 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9b8b420d-d475-4d19-84b8-113facbfcf09-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9mj7d\" (UID: \"9b8b420d-d475-4d19-84b8-113facbfcf09\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066757 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/acfa8e86-e574-4d67-91f7-35a45e1956ee-etcd-service-ca\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066777 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a1a517ef-fa49-4652-9039-30ef3b824353-metrics-tls\") pod \"ingress-operator-5b745b69d9-jsk4m\" (UID: \"a1a517ef-fa49-4652-9039-30ef3b824353\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066799 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a517ef-fa49-4652-9039-30ef3b824353-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jsk4m\" (UID: \"a1a517ef-fa49-4652-9039-30ef3b824353\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066821 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm2rb\" (UniqueName: \"kubernetes.io/projected/2ab4b4e4-547c-4d23-bdb4-fc7f22902419-kube-api-access-zm2rb\") pod \"service-ca-9c57cc56f-49b2h\" (UID: \"2ab4b4e4-547c-4d23-bdb4-fc7f22902419\") " pod="openshift-service-ca/service-ca-9c57cc56f-49b2h" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 
13:46:31.066867 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acfa8e86-e574-4d67-91f7-35a45e1956ee-config\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066885 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3e4b037-3ec1-4d87-9434-1717d360ab61-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7dd65\" (UID: \"b3e4b037-3ec1-4d87-9434-1717d360ab61\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066901 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-config-volume\") pod \"collect-profiles-29492025-hbn26\" (UID: \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066916 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96dfm\" (UniqueName: \"kubernetes.io/projected/9b8b420d-d475-4d19-84b8-113facbfcf09-kube-api-access-96dfm\") pod \"kube-storage-version-migrator-operator-b67b599dd-9mj7d\" (UID: \"9b8b420d-d475-4d19-84b8-113facbfcf09\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066933 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmbh2\" (UniqueName: 
\"kubernetes.io/projected/b3e4b037-3ec1-4d87-9434-1717d360ab61-kube-api-access-kmbh2\") pod \"package-server-manager-789f6589d5-7dd65\" (UID: \"b3e4b037-3ec1-4d87-9434-1717d360ab61\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066949 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-socket-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066964 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558eb6ca-2441-4818-8d65-5323c39328c2-config\") pod \"service-ca-operator-777779d784-hsggd\" (UID: \"558eb6ca-2441-4818-8d65-5323c39328c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.066987 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr2dg\" (UniqueName: \"kubernetes.io/projected/8c4742c3-5115-49f4-85ed-a3eda8373114-kube-api-access-mr2dg\") pod \"migrator-59844c95c7-cptz2\" (UID: \"8c4742c3-5115-49f4-85ed-a3eda8373114\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cptz2" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067002 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h87q6\" (UniqueName: \"kubernetes.io/projected/42b6b062-22d8-4a87-a0d1-8c24ec3e9637-kube-api-access-h87q6\") pod \"dns-default-fxxfj\" (UID: \"42b6b062-22d8-4a87-a0d1-8c24ec3e9637\") " pod="openshift-dns/dns-default-fxxfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067017 4914 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4-cert\") pod \"ingress-canary-f8kzr\" (UID: \"b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4\") " pod="openshift-ingress-canary/ingress-canary-f8kzr" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067034 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdwjd\" (UniqueName: \"kubernetes.io/projected/558eb6ca-2441-4818-8d65-5323c39328c2-kube-api-access-hdwjd\") pod \"service-ca-operator-777779d784-hsggd\" (UID: \"558eb6ca-2441-4818-8d65-5323c39328c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067064 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qm7f\" (UniqueName: \"kubernetes.io/projected/cb849428-e713-45bf-b3d4-ddd350825372-kube-api-access-8qm7f\") pod \"multus-admission-controller-857f4d67dd-rkhw5\" (UID: \"cb849428-e713-45bf-b3d4-ddd350825372\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rkhw5" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067085 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b6ea188-6622-4aaa-b9a1-209ff123514f-srv-cert\") pod \"olm-operator-6b444d44fb-jnmjw\" (UID: \"8b6ea188-6622-4aaa-b9a1-209ff123514f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067100 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klc67\" (UniqueName: \"kubernetes.io/projected/08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6-kube-api-access-klc67\") pod \"catalog-operator-68c6474976-ljqd4\" (UID: \"08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" Jan 27 13:46:31 crc 
kubenswrapper[4914]: I0127 13:46:31.067118 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c2ae126b-c993-4138-b9a6-dc7a99e9f69e-node-bootstrap-token\") pod \"machine-config-server-8skmg\" (UID: \"c2ae126b-c993-4138-b9a6-dc7a99e9f69e\") " pod="openshift-machine-config-operator/machine-config-server-8skmg" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067134 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-csi-data-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067149 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02a14aa-070b-42e4-a44b-bb6bd50e03b1-config\") pod \"kube-controller-manager-operator-78b949d7b-m9vdw\" (UID: \"a02a14aa-070b-42e4-a44b-bb6bd50e03b1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067166 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5n67\" (UniqueName: \"kubernetes.io/projected/628bfff0-2254-4c7c-a8a4-01b2288d8535-kube-api-access-c5n67\") pod \"control-plane-machine-set-operator-78cbb6b69f-mlnd8\" (UID: \"628bfff0-2254-4c7c-a8a4-01b2288d8535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067182 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-ljqd4\" (UID: \"08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067199 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/acfa8e86-e574-4d67-91f7-35a45e1956ee-etcd-ca\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067214 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2ab4b4e4-547c-4d23-bdb4-fc7f22902419-signing-cabundle\") pod \"service-ca-9c57cc56f-49b2h\" (UID: \"2ab4b4e4-547c-4d23-bdb4-fc7f22902419\") " pod="openshift-service-ca/service-ca-9c57cc56f-49b2h" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067228 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acfa8e86-e574-4d67-91f7-35a45e1956ee-serving-cert\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067243 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfl9x\" (UniqueName: \"kubernetes.io/projected/7656576b-aeae-4b15-b2ab-18658770a1e5-kube-api-access-dfl9x\") pod \"marketplace-operator-79b997595-w75kk\" (UID: \"7656576b-aeae-4b15-b2ab-18658770a1e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067258 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/6038ea14-7eab-4587-942f-acba6a8a100a-proxy-tls\") pod \"machine-config-operator-74547568cd-8tqf2\" (UID: \"6038ea14-7eab-4587-942f-acba6a8a100a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067274 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/558eb6ca-2441-4818-8d65-5323c39328c2-serving-cert\") pod \"service-ca-operator-777779d784-hsggd\" (UID: \"558eb6ca-2441-4818-8d65-5323c39328c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067287 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-mountpoint-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067303 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2ab4b4e4-547c-4d23-bdb4-fc7f22902419-signing-key\") pod \"service-ca-9c57cc56f-49b2h\" (UID: \"2ab4b4e4-547c-4d23-bdb4-fc7f22902419\") " pod="openshift-service-ca/service-ca-9c57cc56f-49b2h" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067320 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97635386-64c1-4f78-88f2-17faa845b6b2-webhook-cert\") pod \"packageserver-d55dfcdfc-ssxkp\" (UID: \"97635386-64c1-4f78-88f2-17faa845b6b2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067336 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-9bfg8\" (UniqueName: \"kubernetes.io/projected/c2ae126b-c993-4138-b9a6-dc7a99e9f69e-kube-api-access-9bfg8\") pod \"machine-config-server-8skmg\" (UID: \"c2ae126b-c993-4138-b9a6-dc7a99e9f69e\") " pod="openshift-machine-config-operator/machine-config-server-8skmg" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067353 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02a14aa-070b-42e4-a44b-bb6bd50e03b1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m9vdw\" (UID: \"a02a14aa-070b-42e4-a44b-bb6bd50e03b1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067367 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-registration-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067388 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6-srv-cert\") pod \"catalog-operator-68c6474976-ljqd4\" (UID: \"08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067402 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6295fb40-33f8-4b77-8c3e-d36037efa07e-service-ca-bundle\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:31 crc kubenswrapper[4914]: 
I0127 13:46:31.067416 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7656576b-aeae-4b15-b2ab-18658770a1e5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w75kk\" (UID: \"7656576b-aeae-4b15-b2ab-18658770a1e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067434 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6295fb40-33f8-4b77-8c3e-d36037efa07e-default-certificate\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067450 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2106d7e3-6e73-4305-aa0b-e96af20f2a18-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k2jgl\" (UID: \"2106d7e3-6e73-4305-aa0b-e96af20f2a18\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067466 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2106d7e3-6e73-4305-aa0b-e96af20f2a18-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k2jgl\" (UID: \"2106d7e3-6e73-4305-aa0b-e96af20f2a18\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067480 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nztw\" (UniqueName: \"kubernetes.io/projected/acfa8e86-e574-4d67-91f7-35a45e1956ee-kube-api-access-4nztw\") pod \"etcd-operator-b45778765-6tlvd\" (UID: 
\"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067494 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-secret-volume\") pod \"collect-profiles-29492025-hbn26\" (UID: \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067510 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1a517ef-fa49-4652-9039-30ef3b824353-trusted-ca\") pod \"ingress-operator-5b745b69d9-jsk4m\" (UID: \"a1a517ef-fa49-4652-9039-30ef3b824353\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067529 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97635386-64c1-4f78-88f2-17faa845b6b2-apiservice-cert\") pod \"packageserver-d55dfcdfc-ssxkp\" (UID: \"97635386-64c1-4f78-88f2-17faa845b6b2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067556 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6krk\" (UniqueName: \"kubernetes.io/projected/a1a517ef-fa49-4652-9039-30ef3b824353-kube-api-access-k6krk\") pod \"ingress-operator-5b745b69d9-jsk4m\" (UID: \"a1a517ef-fa49-4652-9039-30ef3b824353\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067575 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-plugins-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067593 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2118bb9-5519-4158-9acc-e9e434d6da7d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mbz96\" (UID: \"d2118bb9-5519-4158-9acc-e9e434d6da7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067622 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwjj5\" (UniqueName: \"kubernetes.io/projected/6038ea14-7eab-4587-942f-acba6a8a100a-kube-api-access-kwjj5\") pod \"machine-config-operator-74547568cd-8tqf2\" (UID: \"6038ea14-7eab-4587-942f-acba6a8a100a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067640 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9br86\" (UniqueName: \"kubernetes.io/projected/8b6ea188-6622-4aaa-b9a1-209ff123514f-kube-api-access-9br86\") pod \"olm-operator-6b444d44fb-jnmjw\" (UID: \"8b6ea188-6622-4aaa-b9a1-209ff123514f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067655 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2106d7e3-6e73-4305-aa0b-e96af20f2a18-config\") pod \"kube-apiserver-operator-766d6c64bb-k2jgl\" (UID: \"2106d7e3-6e73-4305-aa0b-e96af20f2a18\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl" Jan 27 13:46:31 crc kubenswrapper[4914]: 
I0127 13:46:31.067673 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42b6b062-22d8-4a87-a0d1-8c24ec3e9637-config-volume\") pod \"dns-default-fxxfj\" (UID: \"42b6b062-22d8-4a87-a0d1-8c24ec3e9637\") " pod="openshift-dns/dns-default-fxxfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067692 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddj7j\" (UniqueName: \"kubernetes.io/projected/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-kube-api-access-ddj7j\") pod \"collect-profiles-29492025-hbn26\" (UID: \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067716 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5z2k\" (UniqueName: \"kubernetes.io/projected/97635386-64c1-4f78-88f2-17faa845b6b2-kube-api-access-b5z2k\") pod \"packageserver-d55dfcdfc-ssxkp\" (UID: \"97635386-64c1-4f78-88f2-17faa845b6b2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067732 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/acfa8e86-e574-4d67-91f7-35a45e1956ee-etcd-client\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067748 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb797\" (UniqueName: \"kubernetes.io/projected/6295fb40-33f8-4b77-8c3e-d36037efa07e-kube-api-access-cb797\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " 
pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.067763 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/628bfff0-2254-4c7c-a8a4-01b2288d8535-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mlnd8\" (UID: \"628bfff0-2254-4c7c-a8a4-01b2288d8535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.069884 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97635386-64c1-4f78-88f2-17faa845b6b2-tmpfs\") pod \"packageserver-d55dfcdfc-ssxkp\" (UID: \"97635386-64c1-4f78-88f2-17faa845b6b2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.070346 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6038ea14-7eab-4587-942f-acba6a8a100a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8tqf2\" (UID: \"6038ea14-7eab-4587-942f-acba6a8a100a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" Jan 27 13:46:31 crc kubenswrapper[4914]: E0127 13:46:31.070955 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:31.570926235 +0000 UTC m=+149.883276420 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.073107 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-socket-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.073739 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-config-volume\") pod \"collect-profiles-29492025-hbn26\" (UID: \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.074127 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acfa8e86-e574-4d67-91f7-35a45e1956ee-config\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.074255 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-registration-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " 
pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.075202 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558eb6ca-2441-4818-8d65-5323c39328c2-config\") pod \"service-ca-operator-777779d784-hsggd\" (UID: \"558eb6ca-2441-4818-8d65-5323c39328c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.076564 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/acfa8e86-e574-4d67-91f7-35a45e1956ee-etcd-ca\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.076877 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-plugins-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.077194 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6-profile-collector-cert\") pod \"catalog-operator-68c6474976-ljqd4\" (UID: \"08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.078192 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-csi-data-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " 
pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.078974 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02a14aa-070b-42e4-a44b-bb6bd50e03b1-config\") pod \"kube-controller-manager-operator-78b949d7b-m9vdw\" (UID: \"a02a14aa-070b-42e4-a44b-bb6bd50e03b1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.079033 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b8b420d-d475-4d19-84b8-113facbfcf09-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9mj7d\" (UID: \"9b8b420d-d475-4d19-84b8-113facbfcf09\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.079105 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2ab4b4e4-547c-4d23-bdb4-fc7f22902419-signing-cabundle\") pod \"service-ca-9c57cc56f-49b2h\" (UID: \"2ab4b4e4-547c-4d23-bdb4-fc7f22902419\") " pod="openshift-service-ca/service-ca-9c57cc56f-49b2h" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.079680 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6038ea14-7eab-4587-942f-acba6a8a100a-images\") pod \"machine-config-operator-74547568cd-8tqf2\" (UID: \"6038ea14-7eab-4587-942f-acba6a8a100a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.079879 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42b6b062-22d8-4a87-a0d1-8c24ec3e9637-config-volume\") 
pod \"dns-default-fxxfj\" (UID: \"42b6b062-22d8-4a87-a0d1-8c24ec3e9637\") " pod="openshift-dns/dns-default-fxxfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.082167 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6295fb40-33f8-4b77-8c3e-d36037efa07e-service-ca-bundle\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.082300 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/acfa8e86-e574-4d67-91f7-35a45e1956ee-etcd-service-ca\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.082335 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2118bb9-5519-4158-9acc-e9e434d6da7d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mbz96\" (UID: \"d2118bb9-5519-4158-9acc-e9e434d6da7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.083582 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7656576b-aeae-4b15-b2ab-18658770a1e5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w75kk\" (UID: \"7656576b-aeae-4b15-b2ab-18658770a1e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.086367 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/1e9e06d9-6682-4f7f-a4a0-36414213490b-mountpoint-dir\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.092080 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2106d7e3-6e73-4305-aa0b-e96af20f2a18-config\") pod \"kube-apiserver-operator-766d6c64bb-k2jgl\" (UID: \"2106d7e3-6e73-4305-aa0b-e96af20f2a18\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.094216 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1a517ef-fa49-4652-9039-30ef3b824353-trusted-ca\") pod \"ingress-operator-5b745b69d9-jsk4m\" (UID: \"a1a517ef-fa49-4652-9039-30ef3b824353\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.097091 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3e4b037-3ec1-4d87-9434-1717d360ab61-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7dd65\" (UID: \"b3e4b037-3ec1-4d87-9434-1717d360ab61\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.097607 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mwzf8"] Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.098184 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b6ea188-6622-4aaa-b9a1-209ff123514f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jnmjw\" (UID: 
\"8b6ea188-6622-4aaa-b9a1-209ff123514f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.103391 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6295fb40-33f8-4b77-8c3e-d36037efa07e-metrics-certs\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.105755 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2106d7e3-6e73-4305-aa0b-e96af20f2a18-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k2jgl\" (UID: \"2106d7e3-6e73-4305-aa0b-e96af20f2a18\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.106082 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwjt4\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-kube-api-access-zwjt4\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.111469 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4-cert\") pod \"ingress-canary-f8kzr\" (UID: \"b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4\") " pod="openshift-ingress-canary/ingress-canary-f8kzr" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.111574 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c2ae126b-c993-4138-b9a6-dc7a99e9f69e-certs\") pod \"machine-config-server-8skmg\" (UID: 
\"c2ae126b-c993-4138-b9a6-dc7a99e9f69e\") " pod="openshift-machine-config-operator/machine-config-server-8skmg" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.111588 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cb849428-e713-45bf-b3d4-ddd350825372-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rkhw5\" (UID: \"cb849428-e713-45bf-b3d4-ddd350825372\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rkhw5" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.111742 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/42b6b062-22d8-4a87-a0d1-8c24ec3e9637-metrics-tls\") pod \"dns-default-fxxfj\" (UID: \"42b6b062-22d8-4a87-a0d1-8c24ec3e9637\") " pod="openshift-dns/dns-default-fxxfj" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.111914 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a1a517ef-fa49-4652-9039-30ef3b824353-metrics-tls\") pod \"ingress-operator-5b745b69d9-jsk4m\" (UID: \"a1a517ef-fa49-4652-9039-30ef3b824353\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.112013 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2118bb9-5519-4158-9acc-e9e434d6da7d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mbz96\" (UID: \"d2118bb9-5519-4158-9acc-e9e434d6da7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.112153 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6038ea14-7eab-4587-942f-acba6a8a100a-proxy-tls\") pod 
\"machine-config-operator-74547568cd-8tqf2\" (UID: \"6038ea14-7eab-4587-942f-acba6a8a100a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.112315 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7656576b-aeae-4b15-b2ab-18658770a1e5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w75kk\" (UID: \"7656576b-aeae-4b15-b2ab-18658770a1e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.112355 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b6ea188-6622-4aaa-b9a1-209ff123514f-srv-cert\") pod \"olm-operator-6b444d44fb-jnmjw\" (UID: \"8b6ea188-6622-4aaa-b9a1-209ff123514f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.113213 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acfa8e86-e574-4d67-91f7-35a45e1956ee-serving-cert\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.113563 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6295fb40-33f8-4b77-8c3e-d36037efa07e-default-certificate\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.115055 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9b8b420d-d475-4d19-84b8-113facbfcf09-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9mj7d\" (UID: \"9b8b420d-d475-4d19-84b8-113facbfcf09\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.115238 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97635386-64c1-4f78-88f2-17faa845b6b2-webhook-cert\") pod \"packageserver-d55dfcdfc-ssxkp\" (UID: \"97635386-64c1-4f78-88f2-17faa845b6b2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.115670 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2ab4b4e4-547c-4d23-bdb4-fc7f22902419-signing-key\") pod \"service-ca-9c57cc56f-49b2h\" (UID: \"2ab4b4e4-547c-4d23-bdb4-fc7f22902419\") " pod="openshift-service-ca/service-ca-9c57cc56f-49b2h" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.115817 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c2ae126b-c993-4138-b9a6-dc7a99e9f69e-node-bootstrap-token\") pod \"machine-config-server-8skmg\" (UID: \"c2ae126b-c993-4138-b9a6-dc7a99e9f69e\") " pod="openshift-machine-config-operator/machine-config-server-8skmg" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.118363 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02a14aa-070b-42e4-a44b-bb6bd50e03b1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m9vdw\" (UID: \"a02a14aa-070b-42e4-a44b-bb6bd50e03b1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 
13:46:31.120033 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc8h6\" (UniqueName: \"kubernetes.io/projected/2735a1a5-8ace-4c82-94ce-2d81e310612e-kube-api-access-cc8h6\") pod \"console-operator-58897d9998-wnsrs\" (UID: \"2735a1a5-8ace-4c82-94ce-2d81e310612e\") " pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.120109 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/628bfff0-2254-4c7c-a8a4-01b2288d8535-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mlnd8\" (UID: \"628bfff0-2254-4c7c-a8a4-01b2288d8535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.120476 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/acfa8e86-e574-4d67-91f7-35a45e1956ee-etcd-client\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.121241 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97635386-64c1-4f78-88f2-17faa845b6b2-apiservice-cert\") pod \"packageserver-d55dfcdfc-ssxkp\" (UID: \"97635386-64c1-4f78-88f2-17faa845b6b2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.121482 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/558eb6ca-2441-4818-8d65-5323c39328c2-serving-cert\") pod \"service-ca-operator-777779d784-hsggd\" (UID: \"558eb6ca-2441-4818-8d65-5323c39328c2\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.121499 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6295fb40-33f8-4b77-8c3e-d36037efa07e-stats-auth\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.123594 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6-srv-cert\") pod \"catalog-operator-68c6474976-ljqd4\" (UID: \"08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.125230 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/411381d4-3ea7-4578-b962-f2629f8ba142-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rdxmp\" (UID: \"411381d4-3ea7-4578-b962-f2629f8ba142\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.125504 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-secret-volume\") pod \"collect-profiles-29492025-hbn26\" (UID: \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.135911 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztf9j\" (UniqueName: \"kubernetes.io/projected/6d298bcb-b3cc-4a2d-a963-6930301982e0-kube-api-access-ztf9j\") pod \"dns-operator-744455d44c-b4l6h\" (UID: \"6d298bcb-b3cc-4a2d-a963-6930301982e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-b4l6h"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.145689 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wnsrs"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.160244 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.169951 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84wvc\" (UniqueName: \"kubernetes.io/projected/b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4-kube-api-access-84wvc\") pod \"ingress-canary-f8kzr\" (UID: \"b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4\") " pod="openshift-ingress-canary/ingress-canary-f8kzr"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.176427 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:46:31 crc kubenswrapper[4914]: E0127 13:46:31.176987 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:31.67696872 +0000 UTC m=+149.989318805 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.193516 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2118bb9-5519-4158-9acc-e9e434d6da7d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mbz96\" (UID: \"d2118bb9-5519-4158-9acc-e9e434d6da7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.212266 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87"]
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.215931 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ncjj\" (UniqueName: \"kubernetes.io/projected/1e9e06d9-6682-4f7f-a4a0-36414213490b-kube-api-access-8ncjj\") pod \"csi-hostpathplugin-gzlfj\" (UID: \"1e9e06d9-6682-4f7f-a4a0-36414213490b\") " pod="hostpath-provisioner/csi-hostpathplugin-gzlfj"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.227474 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb797\" (UniqueName: \"kubernetes.io/projected/6295fb40-33f8-4b77-8c3e-d36037efa07e-kube-api-access-cb797\") pod \"router-default-5444994796-vm26v\" (UID: \"6295fb40-33f8-4b77-8c3e-d36037efa07e\") " pod="openshift-ingress/router-default-5444994796-vm26v"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.238766 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.250673 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdwjd\" (UniqueName: \"kubernetes.io/projected/558eb6ca-2441-4818-8d65-5323c39328c2-kube-api-access-hdwjd\") pod \"service-ca-operator-777779d784-hsggd\" (UID: \"558eb6ca-2441-4818-8d65-5323c39328c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.260817 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.269455 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qm7f\" (UniqueName: \"kubernetes.io/projected/cb849428-e713-45bf-b3d4-ddd350825372-kube-api-access-8qm7f\") pod \"multus-admission-controller-857f4d67dd-rkhw5\" (UID: \"cb849428-e713-45bf-b3d4-ddd350825372\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rkhw5"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.271638 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-62njv"]
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.277604 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:46:31 crc kubenswrapper[4914]: E0127 13:46:31.277982 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:31.777966222 +0000 UTC m=+150.090316307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.287328 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a517ef-fa49-4652-9039-30ef3b824353-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jsk4m\" (UID: \"a1a517ef-fa49-4652-9039-30ef3b824353\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.305380 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm2rb\" (UniqueName: \"kubernetes.io/projected/2ab4b4e4-547c-4d23-bdb4-fc7f22902419-kube-api-access-zm2rb\") pod \"service-ca-9c57cc56f-49b2h\" (UID: \"2ab4b4e4-547c-4d23-bdb4-fc7f22902419\") " pod="openshift-service-ca/service-ca-9c57cc56f-49b2h"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.310638 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vm26v"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.328248 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klc67\" (UniqueName: \"kubernetes.io/projected/08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6-kube-api-access-klc67\") pod \"catalog-operator-68c6474976-ljqd4\" (UID: \"08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.344172 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j"]
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.347977 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.349774 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96dfm\" (UniqueName: \"kubernetes.io/projected/9b8b420d-d475-4d19-84b8-113facbfcf09-kube-api-access-96dfm\") pod \"kube-storage-version-migrator-operator-b67b599dd-9mj7d\" (UID: \"9b8b420d-d475-4d19-84b8-113facbfcf09\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d"
Jan 27 13:46:31 crc kubenswrapper[4914]: W0127 13:46:31.360109 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6295fb40_33f8_4b77_8c3e_d36037efa07e.slice/crio-1164742b8c3430a6c605668cf9ce14951a78470300b68bde710fe44fa249f05b WatchSource:0}: Error finding container 1164742b8c3430a6c605668cf9ce14951a78470300b68bde710fe44fa249f05b: Status 404 returned error can't find the container with id 1164742b8c3430a6c605668cf9ce14951a78470300b68bde710fe44fa249f05b
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.365199 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmbh2\" (UniqueName: \"kubernetes.io/projected/b3e4b037-3ec1-4d87-9434-1717d360ab61-kube-api-access-kmbh2\") pod \"package-server-manager-789f6589d5-7dd65\" (UID: \"b3e4b037-3ec1-4d87-9434-1717d360ab61\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.369237 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wnsrs"]
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.370403 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-f8kzr"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.378295 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:46:31 crc kubenswrapper[4914]: E0127 13:46:31.378421 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:31.878402839 +0000 UTC m=+150.190752924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.378542 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:46:31 crc kubenswrapper[4914]: E0127 13:46:31.379055 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:31.879043935 +0000 UTC m=+150.191394020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.384085 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8"]
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.385403 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr2dg\" (UniqueName: \"kubernetes.io/projected/8c4742c3-5115-49f4-85ed-a3eda8373114-kube-api-access-mr2dg\") pod \"migrator-59844c95c7-cptz2\" (UID: \"8c4742c3-5115-49f4-85ed-a3eda8373114\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cptz2"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.404591 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.405964 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gzlfj"
Jan 27 13:46:31 crc kubenswrapper[4914]: W0127 13:46:31.408222 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f09ceb8_63c3_4f6a_9775_08ca7702b559.slice/crio-0bb50124a525d4bdedc6ee0b47cec03d85f4d0c97af15ce150c985031b5cba47 WatchSource:0}: Error finding container 0bb50124a525d4bdedc6ee0b47cec03d85f4d0c97af15ce150c985031b5cba47: Status 404 returned error can't find the container with id 0bb50124a525d4bdedc6ee0b47cec03d85f4d0c97af15ce150c985031b5cba47
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.411753 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h87q6\" (UniqueName: \"kubernetes.io/projected/42b6b062-22d8-4a87-a0d1-8c24ec3e9637-kube-api-access-h87q6\") pod \"dns-default-fxxfj\" (UID: \"42b6b062-22d8-4a87-a0d1-8c24ec3e9637\") " pod="openshift-dns/dns-default-fxxfj"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.418974 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b4l6h"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.429008 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6krk\" (UniqueName: \"kubernetes.io/projected/a1a517ef-fa49-4652-9039-30ef3b824353-kube-api-access-k6krk\") pod \"ingress-operator-5b745b69d9-jsk4m\" (UID: \"a1a517ef-fa49-4652-9039-30ef3b824353\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.456021 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddj7j\" (UniqueName: \"kubernetes.io/projected/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-kube-api-access-ddj7j\") pod \"collect-profiles-29492025-hbn26\" (UID: \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.464880 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2106d7e3-6e73-4305-aa0b-e96af20f2a18-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k2jgl\" (UID: \"2106d7e3-6e73-4305-aa0b-e96af20f2a18\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.480942 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:46:31 crc kubenswrapper[4914]: E0127 13:46:31.481294 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:31.98127389 +0000 UTC m=+150.293623975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.494312 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfl9x\" (UniqueName: \"kubernetes.io/projected/7656576b-aeae-4b15-b2ab-18658770a1e5-kube-api-access-dfl9x\") pod \"marketplace-operator-79b997595-w75kk\" (UID: \"7656576b-aeae-4b15-b2ab-18658770a1e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-w75kk"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.505898 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5z2k\" (UniqueName: \"kubernetes.io/projected/97635386-64c1-4f78-88f2-17faa845b6b2-kube-api-access-b5z2k\") pod \"packageserver-d55dfcdfc-ssxkp\" (UID: \"97635386-64c1-4f78-88f2-17faa845b6b2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.531071 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.535304 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5n67\" (UniqueName: \"kubernetes.io/projected/628bfff0-2254-4c7c-a8a4-01b2288d8535-kube-api-access-c5n67\") pod \"control-plane-machine-set-operator-78cbb6b69f-mlnd8\" (UID: \"628bfff0-2254-4c7c-a8a4-01b2288d8535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.546754 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96"]
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.555192 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rkhw5"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.555248 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.556712 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwjj5\" (UniqueName: \"kubernetes.io/projected/6038ea14-7eab-4587-942f-acba6a8a100a-kube-api-access-kwjj5\") pod \"machine-config-operator-74547568cd-8tqf2\" (UID: \"6038ea14-7eab-4587-942f-acba6a8a100a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.568415 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.575552 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.580415 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bfg8\" (UniqueName: \"kubernetes.io/projected/c2ae126b-c993-4138-b9a6-dc7a99e9f69e-kube-api-access-9bfg8\") pod \"machine-config-server-8skmg\" (UID: \"c2ae126b-c993-4138-b9a6-dc7a99e9f69e\") " pod="openshift-machine-config-operator/machine-config-server-8skmg"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.583501 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.583848 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:46:31 crc kubenswrapper[4914]: E0127 13:46:31.584163 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.084148441 +0000 UTC m=+150.396498526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.592134 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-49b2h"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.592463 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9br86\" (UniqueName: \"kubernetes.io/projected/8b6ea188-6622-4aaa-b9a1-209ff123514f-kube-api-access-9br86\") pod \"olm-operator-6b444d44fb-jnmjw\" (UID: \"8b6ea188-6622-4aaa-b9a1-209ff123514f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.597811 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.602378 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hsggd"]
Jan 27 13:46:31 crc kubenswrapper[4914]: W0127 13:46:31.609647 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2118bb9_5519_4158_9acc_e9e434d6da7d.slice/crio-77fee78c943844b032f3dda956c75441f3060ce43f624cf5052a956c63dde2f3 WatchSource:0}: Error finding container 77fee78c943844b032f3dda956c75441f3060ce43f624cf5052a956c63dde2f3: Status 404 returned error can't find the container with id 77fee78c943844b032f3dda956c75441f3060ce43f624cf5052a956c63dde2f3
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.611721 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nztw\" (UniqueName: \"kubernetes.io/projected/acfa8e86-e574-4d67-91f7-35a45e1956ee-kube-api-access-4nztw\") pod \"etcd-operator-b45778765-6tlvd\" (UID: \"acfa8e86-e574-4d67-91f7-35a45e1956ee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.623397 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.630280 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02a14aa-070b-42e4-a44b-bb6bd50e03b1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m9vdw\" (UID: \"a02a14aa-070b-42e4-a44b-bb6bd50e03b1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.643166 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cptz2"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.643186 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.655585 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.663412 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8skmg"
Jan 27 13:46:31 crc kubenswrapper[4914]: W0127 13:46:31.665758 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod558eb6ca_2441_4818_8d65_5323c39328c2.slice/crio-4aaf72fb2faef502479b1358ac7ca374ed27a9cf5d85372f16d1972204f9b46b WatchSource:0}: Error finding container 4aaf72fb2faef502479b1358ac7ca374ed27a9cf5d85372f16d1972204f9b46b: Status 404 returned error can't find the container with id 4aaf72fb2faef502479b1358ac7ca374ed27a9cf5d85372f16d1972204f9b46b
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.678057 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fxxfj"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.685447 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:46:31 crc kubenswrapper[4914]: E0127 13:46:31.685935 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.185917823 +0000 UTC m=+150.498267908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.735175 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-cmsxp" podStartSLOduration=128.73515181 podStartE2EDuration="2m8.73515181s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:31.727655272 +0000 UTC m=+150.040005377" watchObservedRunningTime="2026-01-27 13:46:31.73515181 +0000 UTC m=+150.047501895"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.754243 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4"]
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.787654 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:46:31 crc kubenswrapper[4914]: E0127 13:46:31.788191 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.288170658 +0000 UTC m=+150.600520743 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.816457 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.822969 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.860534 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-f8kzr"]
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.863112 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mwzf8" event={"ID":"7add3664-f0a1-4575-bc02-ff364cf808b7","Type":"ContainerStarted","Data":"be878b3a27d22a0a442960b6d03c38bbfab5d1bb5ceebbc90727f878841cd78d"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.863161 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mwzf8" event={"ID":"7add3664-f0a1-4575-bc02-ff364cf808b7","Type":"ContainerStarted","Data":"84f4b3bfe47ca4d098b552b0a76541d4d526f96fda2346ac7786313b6f96a2fc"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.863608 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mwzf8"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.865105 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" event={"ID":"5bc9d257-6992-48cf-963b-42c22a5dd170","Type":"ContainerStarted","Data":"5598772a9005461893aa39d16edf9e3bfd6781330abb14f48ae1ae01e8b2151a"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.865128 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" event={"ID":"5bc9d257-6992-48cf-963b-42c22a5dd170","Type":"ContainerStarted","Data":"24e222b26766c01813eab0ba08d0391cbf866246b25298acdc54dbe6216506a9"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.865364 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.866161 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.866195 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.866603 4914 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-f8vp8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.866626 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" podUID="5bc9d257-6992-48cf-963b-42c22a5dd170" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.867021 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv" event={"ID":"e7c1b522-886f-41d2-b7da-a6d4316c3b31","Type":"ContainerStarted","Data":"dc9c5b2bac399d226d056c7d005c02184496e2947ad46449d5764d87e875c6e1"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.867047 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv" event={"ID":"e7c1b522-886f-41d2-b7da-a6d4316c3b31","Type":"ContainerStarted","Data":"31a539c8cbd31993127431d893ab39018ac6c7ad5d312f3e725763bb94e41608"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.867058 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv" event={"ID":"e7c1b522-886f-41d2-b7da-a6d4316c3b31","Type":"ContainerStarted","Data":"a9bcdbad9a68128d3a2dcb7a650dfebfc573482f3376cb1f4e53f1c052b11f01"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.867777 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd" event={"ID":"558eb6ca-2441-4818-8d65-5323c39328c2","Type":"ContainerStarted","Data":"4aaf72fb2faef502479b1358ac7ca374ed27a9cf5d85372f16d1972204f9b46b"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.869659 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vm26v" event={"ID":"6295fb40-33f8-4b77-8c3e-d36037efa07e","Type":"ContainerStarted","Data":"1164742b8c3430a6c605668cf9ce14951a78470300b68bde710fe44fa249f05b"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.870557 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96" event={"ID":"d2118bb9-5519-4158-9acc-e9e434d6da7d","Type":"ContainerStarted","Data":"77fee78c943844b032f3dda956c75441f3060ce43f624cf5052a956c63dde2f3"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.871536 4914 generic.go:334] "Generic (PLEG): container finished" podID="9899e103-466c-4fb4-887b-916ca5e7ca72" containerID="12afe1c78eabf06b06bea426850bc6ad1038ff1e79d5745401c3752db7d15055" exitCode=0
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.871586 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" event={"ID":"9899e103-466c-4fb4-887b-916ca5e7ca72","Type":"ContainerDied","Data":"12afe1c78eabf06b06bea426850bc6ad1038ff1e79d5745401c3752db7d15055"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.873262 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" event={"ID":"745ec1ee-15c4-456b-9e1e-9015e27c4845","Type":"ContainerStarted","Data":"96d00e119b54d33d24bb92af55242d719bd335e0a6aa36e50021963f67904f51"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.873286 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" event={"ID":"745ec1ee-15c4-456b-9e1e-9015e27c4845","Type":"ContainerStarted","Data":"8fc5cdd90f9ef1120b1353374f41ee906fdc3382292332fc368cb6cb070030fa"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.874491 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.875679 4914 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nv9bs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.875731 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" podUID="745ec1ee-15c4-456b-9e1e-9015e27c4845" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.876375 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" event={"ID":"543ed275-142d-4301-a0c2-33a99233ee0d","Type":"ContainerStarted","Data":"e9420eca5bfd249d667fcb19710a6544b0200c0629f5d74bac852745a89c0208"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.878410 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wnsrs" event={"ID":"2735a1a5-8ace-4c82-94ce-2d81e310612e","Type":"ContainerStarted","Data":"08b2a49d4ffc9d186fc1b48a2e38a7ebfa4c160f3a74d4e18c0967e8a04e8586"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.878462 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wnsrs" event={"ID":"2735a1a5-8ace-4c82-94ce-2d81e310612e","Type":"ContainerStarted","Data":"71d2783170a797e885220c8ac76a3e7da30733b4b1df5bf99e8b7220d379d340"}
Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.881736 4914 kubelet.go:2453] "SyncLoop (PLEG): event for
pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8" event={"ID":"5f09ceb8-63c3-4f6a-9775-08ca7702b559","Type":"ContainerStarted","Data":"e4b74d82f76806198f2921e74a51d936274b377d920ffddd04f3286d1581e8e5"} Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.881783 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8" event={"ID":"5f09ceb8-63c3-4f6a-9775-08ca7702b559","Type":"ContainerStarted","Data":"0bb50124a525d4bdedc6ee0b47cec03d85f4d0c97af15ce150c985031b5cba47"} Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.889583 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:31 crc kubenswrapper[4914]: E0127 13:46:31.889731 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.389706993 +0000 UTC m=+150.702057078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.889999 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:31 crc kubenswrapper[4914]: E0127 13:46:31.890273 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.390261409 +0000 UTC m=+150.702611494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.892931 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557" event={"ID":"b6bbe5d3-1e4c-4790-9216-6cc5499a2e09","Type":"ContainerStarted","Data":"e61413d600141d3bbd9a08f484c0f3f43349cfff9aa8fcc95aed628fec7cb516"} Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.899404 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-62njv" event={"ID":"109be131-7cbd-4205-b5c7-eaf7790737f4","Type":"ContainerStarted","Data":"698b81a8e6bc81a0eec15627733b169e6a9b73343c0fa17729ff40e85dd3d605"} Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.907521 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rktkr" event={"ID":"90260720-9ce0-4da9-932b-34f7ce235091","Type":"ContainerStarted","Data":"f4e498cbfa22d30ec98a7de71d843b62bc07e55205f02abf88fc4eda84dd5bde"} Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.910125 4914 generic.go:334] "Generic (PLEG): container finished" podID="589608b0-5454-404a-acd7-f164145a1bc0" containerID="4e788dd74134ad51f69121f0113330a7db9d188b66b69c47ba5eed4a38cd0a4e" exitCode=0 Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.910321 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" 
event={"ID":"589608b0-5454-404a-acd7-f164145a1bc0","Type":"ContainerDied","Data":"4e788dd74134ad51f69121f0113330a7db9d188b66b69c47ba5eed4a38cd0a4e"} Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.910358 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" event={"ID":"589608b0-5454-404a-acd7-f164145a1bc0","Type":"ContainerStarted","Data":"abe8dce7cd6b60ea963e6deecaf835a010c80c95ae665789992a78782a88ef2c"} Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.915264 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw" Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.920746 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2" event={"ID":"fd8b40eb-b619-4662-b1d1-056c912b7d88","Type":"ContainerStarted","Data":"51845395ea9b9fcab4746b9dce79ea08d127320923d48c5cc216005947db84d9"} Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.920796 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2" event={"ID":"fd8b40eb-b619-4662-b1d1-056c912b7d88","Type":"ContainerStarted","Data":"dcf44d7105ea160464c4596711894ceb6e0d3d44c8ed75d2a98cf7d5e20d8f32"} Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.934132 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx" event={"ID":"0074c027-d7a9-4958-81dc-65a378eb8910","Type":"ContainerStarted","Data":"3a8581e2d4e923b228bb1f22526157ecb6b97615bfc186d5606b1bd571a79b5d"} Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.936666 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j" 
event={"ID":"630b7825-f758-4b21-ad2a-f08f54b23dfb","Type":"ContainerStarted","Data":"183b786427bcdba945603d1305540066fbbf31893972c62d7ecfebeafd8ad0de"} Jan 27 13:46:31 crc kubenswrapper[4914]: W0127 13:46:31.972147 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4b872ac_004a_43d1_b1d1_a12f2ac2f3f4.slice/crio-c29c229937a0d307ee5968d4f309c65f0315b4a9f83846efb3806626d6569009 WatchSource:0}: Error finding container c29c229937a0d307ee5968d4f309c65f0315b4a9f83846efb3806626d6569009: Status 404 returned error can't find the container with id c29c229937a0d307ee5968d4f309c65f0315b4a9f83846efb3806626d6569009 Jan 27 13:46:31 crc kubenswrapper[4914]: W0127 13:46:31.981041 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2ae126b_c993_4138_b9a6_dc7a99e9f69e.slice/crio-272cf02c7dff6ab716e74c74566c53f382e5d0cb00738635aa4178a024bc91f5 WatchSource:0}: Error finding container 272cf02c7dff6ab716e74c74566c53f382e5d0cb00738635aa4178a024bc91f5: Status 404 returned error can't find the container with id 272cf02c7dff6ab716e74c74566c53f382e5d0cb00738635aa4178a024bc91f5 Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.989518 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp"] Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.990641 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:31 crc kubenswrapper[4914]: E0127 13:46:31.993421 4914 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.493397866 +0000 UTC m=+150.805747941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:31 crc kubenswrapper[4914]: I0127 13:46:31.995375 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:31 crc kubenswrapper[4914]: E0127 13:46:31.998018 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.497973347 +0000 UTC m=+150.810323532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.100375 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:32 crc kubenswrapper[4914]: E0127 13:46:32.100538 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.60051787 +0000 UTC m=+150.912867955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.100993 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:32 crc kubenswrapper[4914]: E0127 13:46:32.101437 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.601421944 +0000 UTC m=+150.913772039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.185052 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.201864 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:32 crc kubenswrapper[4914]: E0127 13:46:32.202086 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.702066716 +0000 UTC m=+151.014416801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.202162 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:32 crc kubenswrapper[4914]: E0127 13:46:32.202673 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.702665902 +0000 UTC m=+151.015015987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.212783 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b4l6h"] Jan 27 13:46:32 crc kubenswrapper[4914]: W0127 13:46:32.293058 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d298bcb_b3cc_4a2d_a963_6930301982e0.slice/crio-5c16367a0dafb8983bf3f7cfed34e63dc206db2bfa97640ae31b643d004398a5 WatchSource:0}: Error finding container 5c16367a0dafb8983bf3f7cfed34e63dc206db2bfa97640ae31b643d004398a5: Status 404 returned error can't find the container with id 5c16367a0dafb8983bf3f7cfed34e63dc206db2bfa97640ae31b643d004398a5 Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.303356 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.304029 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.304092 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:32 crc kubenswrapper[4914]: E0127 13:46:32.304717 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.80468794 +0000 UTC m=+151.117038025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.315845 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.332019 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.405306 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.405347 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.405402 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:32 crc kubenswrapper[4914]: E0127 13:46:32.405917 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:32.905897878 +0000 UTC m=+151.218247963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.410657 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.411385 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.428876 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.430791 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.441749 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.451445 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gzlfj"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.455187 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.470988 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-49b2h"] Jan 27 13:46:32 crc kubenswrapper[4914]: W0127 13:46:32.480208 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod628bfff0_2254_4c7c_a8a4_01b2288d8535.slice/crio-e8297b6908180ab6b21c177d23a371c562bed65cc0b7067dfdcbfe69b4db7319 WatchSource:0}: Error finding container e8297b6908180ab6b21c177d23a371c562bed65cc0b7067dfdcbfe69b4db7319: Status 404 returned error can't find the container with id e8297b6908180ab6b21c177d23a371c562bed65cc0b7067dfdcbfe69b4db7319 Jan 27 13:46:32 crc kubenswrapper[4914]: W0127 13:46:32.499811 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e9e06d9_6682_4f7f_a4a0_36414213490b.slice/crio-5fe6db949c60e15fc5609fb48dc7b68d4a49a4d4c7500b5ceb232792447c773f WatchSource:0}: Error finding container 
5fe6db949c60e15fc5609fb48dc7b68d4a49a4d4c7500b5ceb232792447c773f: Status 404 returned error can't find the container with id 5fe6db949c60e15fc5609fb48dc7b68d4a49a4d4c7500b5ceb232792447c773f Jan 27 13:46:32 crc kubenswrapper[4914]: W0127 13:46:32.505026 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b8b420d_d475_4d19_84b8_113facbfcf09.slice/crio-f0896bbdd3f439cbd6d581afe49fd950f53c63a985b104a05f4929feeaede039 WatchSource:0}: Error finding container f0896bbdd3f439cbd6d581afe49fd950f53c63a985b104a05f4929feeaede039: Status 404 returned error can't find the container with id f0896bbdd3f439cbd6d581afe49fd950f53c63a985b104a05f4929feeaede039 Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.505941 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:32 crc kubenswrapper[4914]: E0127 13:46:32.506117 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:33.006085998 +0000 UTC m=+151.318436083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.506321 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:32 crc kubenswrapper[4914]: E0127 13:46:32.506727 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:33.006717536 +0000 UTC m=+151.319067631 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.607133 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:32 crc kubenswrapper[4914]: E0127 13:46:32.607386 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:33.107325776 +0000 UTC m=+151.419675871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.607469 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:32 crc kubenswrapper[4914]: E0127 13:46:32.607800 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:33.107783379 +0000 UTC m=+151.420133464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.616340 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cptz2"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.620506 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w75kk"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.625533 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.628992 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.632257 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26"] Jan 27 13:46:32 crc kubenswrapper[4914]: W0127 13:46:32.676026 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7656576b_aeae_4b15_b2ab_18658770a1e5.slice/crio-f00c094b2e022af70c559db27cdfbc4a8573e2bbbe1d2dfaaaae45ec9835e66b WatchSource:0}: Error finding container f00c094b2e022af70c559db27cdfbc4a8573e2bbbe1d2dfaaaae45ec9835e66b: Status 404 returned error can't find the container with id f00c094b2e022af70c559db27cdfbc4a8573e2bbbe1d2dfaaaae45ec9835e66b Jan 27 13:46:32 crc kubenswrapper[4914]: 
W0127 13:46:32.676387 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c4742c3_5115_49f4_85ed_a3eda8373114.slice/crio-f9860e2606fdbdd58cfc702c1565cb37db3484ca9459e4369bb9cf76173ebee7 WatchSource:0}: Error finding container f9860e2606fdbdd58cfc702c1565cb37db3484ca9459e4369bb9cf76173ebee7: Status 404 returned error can't find the container with id f9860e2606fdbdd58cfc702c1565cb37db3484ca9459e4369bb9cf76173ebee7 Jan 27 13:46:32 crc kubenswrapper[4914]: W0127 13:46:32.678936 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97635386_64c1_4f78_88f2_17faa845b6b2.slice/crio-d2e9a49c17c3bd05562876269f30ae7eb6970593212a7021fe6992ec92ba08a3 WatchSource:0}: Error finding container d2e9a49c17c3bd05562876269f30ae7eb6970593212a7021fe6992ec92ba08a3: Status 404 returned error can't find the container with id d2e9a49c17c3bd05562876269f30ae7eb6970593212a7021fe6992ec92ba08a3 Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.710342 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:32 crc kubenswrapper[4914]: E0127 13:46:32.710748 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:33.210730131 +0000 UTC m=+151.523080216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.711950 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.713256 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.715028 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fxxfj"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.716913 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rkhw5"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.811616 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:32 crc kubenswrapper[4914]: E0127 13:46:32.812968 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 13:46:33.312952516 +0000 UTC m=+151.625302601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.844016 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.857698 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.860957 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2"] Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.865313 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6tlvd"] Jan 27 13:46:32 crc kubenswrapper[4914]: W0127 13:46:32.895618 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6038ea14_7eab_4587_942f_acba6a8a100a.slice/crio-09c06c08635dd82046c1fed5dd09ed7770fa063ae612a9509a6c5e2371c00a9c WatchSource:0}: Error finding container 09c06c08635dd82046c1fed5dd09ed7770fa063ae612a9509a6c5e2371c00a9c: Status 404 returned error can't find the container with id 09c06c08635dd82046c1fed5dd09ed7770fa063ae612a9509a6c5e2371c00a9c Jan 27 13:46:32 crc kubenswrapper[4914]: W0127 13:46:32.908489 4914 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a517ef_fa49_4652_9039_30ef3b824353.slice/crio-841cb8fc02dfc8f0449b6157e0e5875067cbf83eecfcab83562190140dc05308 WatchSource:0}: Error finding container 841cb8fc02dfc8f0449b6157e0e5875067cbf83eecfcab83562190140dc05308: Status 404 returned error can't find the container with id 841cb8fc02dfc8f0449b6157e0e5875067cbf83eecfcab83562190140dc05308 Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.915161 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:32 crc kubenswrapper[4914]: E0127 13:46:32.915449 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:33.415434746 +0000 UTC m=+151.727784831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:32 crc kubenswrapper[4914]: W0127 13:46:32.925863 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda02a14aa_070b_42e4_a44b_bb6bd50e03b1.slice/crio-1bd668765c0149d3b0ee9a59bc2a2217f4b328a6069db4646aaabbe7ee3bc544 WatchSource:0}: Error finding container 1bd668765c0149d3b0ee9a59bc2a2217f4b328a6069db4646aaabbe7ee3bc544: Status 404 returned error can't find the container with id 1bd668765c0149d3b0ee9a59bc2a2217f4b328a6069db4646aaabbe7ee3bc544 Jan 27 13:46:32 crc kubenswrapper[4914]: W0127 13:46:32.948031 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-fb077a1b50da40386decf130347c32ce1541ea4f0de08de57ffcb10c0d10fac3 WatchSource:0}: Error finding container fb077a1b50da40386decf130347c32ce1541ea4f0de08de57ffcb10c0d10fac3: Status 404 returned error can't find the container with id fb077a1b50da40386decf130347c32ce1541ea4f0de08de57ffcb10c0d10fac3 Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.958284 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-62njv" event={"ID":"109be131-7cbd-4205-b5c7-eaf7790737f4","Type":"ContainerStarted","Data":"ba212f6328b03ccab122f1d64ec8b61dce5bb36cab230ffc84c14459559b3b18"} Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.958718 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.962064 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8skmg" event={"ID":"c2ae126b-c993-4138-b9a6-dc7a99e9f69e","Type":"ContainerStarted","Data":"272cf02c7dff6ab716e74c74566c53f382e5d0cb00738635aa4178a024bc91f5"} Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.965279 4914 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-62njv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" start-of-body= Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.965326 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-62njv" podUID="109be131-7cbd-4205-b5c7-eaf7790737f4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.975375 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b4l6h" event={"ID":"6d298bcb-b3cc-4a2d-a963-6930301982e0","Type":"ContainerStarted","Data":"5c16367a0dafb8983bf3f7cfed34e63dc206db2bfa97640ae31b643d004398a5"} Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.980120 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" event={"ID":"1e9e06d9-6682-4f7f-a4a0-36414213490b","Type":"ContainerStarted","Data":"5fe6db949c60e15fc5609fb48dc7b68d4a49a4d4c7500b5ceb232792447c773f"} Jan 27 13:46:32 crc kubenswrapper[4914]: I0127 13:46:32.983795 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" 
event={"ID":"97635386-64c1-4f78-88f2-17faa845b6b2","Type":"ContainerStarted","Data":"d2e9a49c17c3bd05562876269f30ae7eb6970593212a7021fe6992ec92ba08a3"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.003660 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j" event={"ID":"630b7825-f758-4b21-ad2a-f08f54b23dfb","Type":"ContainerStarted","Data":"5634ec5fc92d00b391ac4697c303814df3649952e1ec34d1b225fd056fc67408"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.012264 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" event={"ID":"7656576b-aeae-4b15-b2ab-18658770a1e5","Type":"ContainerStarted","Data":"f00c094b2e022af70c559db27cdfbc4a8573e2bbbe1d2dfaaaae45ec9835e66b"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.013606 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" event={"ID":"acfa8e86-e574-4d67-91f7-35a45e1956ee","Type":"ContainerStarted","Data":"51715f0a937000452e82351a1df0707ba6e3908b9f1cb98eff1ebe92d7b46423"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.016283 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.017226 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:33.517207109 +0000 UTC m=+151.829557264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.017351 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" event={"ID":"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d","Type":"ContainerStarted","Data":"59225a8e287b776521095d0f09271dbde5745465912e884a3d8f01790cbeb1ee"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.019488 4914 generic.go:334] "Generic (PLEG): container finished" podID="543ed275-142d-4301-a0c2-33a99233ee0d" containerID="959d8d771012bc526642699cca5397719416cc69e00507c86ce3d6a57860c95d" exitCode=0 Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.019555 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" event={"ID":"543ed275-142d-4301-a0c2-33a99233ee0d","Type":"ContainerDied","Data":"959d8d771012bc526642699cca5397719416cc69e00507c86ce3d6a57860c95d"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.024859 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fxxfj" event={"ID":"42b6b062-22d8-4a87-a0d1-8c24ec3e9637","Type":"ContainerStarted","Data":"a01f5f71feae13a1a341da9b5669039e965fe11c99daa95744c68568bc5c72d0"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.030207 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" 
event={"ID":"a1a517ef-fa49-4652-9039-30ef3b824353","Type":"ContainerStarted","Data":"841cb8fc02dfc8f0449b6157e0e5875067cbf83eecfcab83562190140dc05308"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.034047 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" event={"ID":"8b6ea188-6622-4aaa-b9a1-209ff123514f","Type":"ContainerStarted","Data":"019b82dfee2daab6b1de072539b0c62ad0a2fd1e387f55f8af0836eb5f7ea60c"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.038773 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd" event={"ID":"558eb6ca-2441-4818-8d65-5323c39328c2","Type":"ContainerStarted","Data":"02e880c74eabea531e037a77cbe1fd849194868042a6b0b690244ca446465d4c"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.045540 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65" event={"ID":"b3e4b037-3ec1-4d87-9434-1717d360ab61","Type":"ContainerStarted","Data":"2c133009fc970e6bd3c5fb02bc18ccc0952d501b6bc202026ba576fa92675c40"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.047142 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rkhw5" event={"ID":"cb849428-e713-45bf-b3d4-ddd350825372","Type":"ContainerStarted","Data":"7f5c5826a8fae402b29d6ed6a1843b7f5d9eb03e16ba0f76be331bb6b5a0baff"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.047914 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" event={"ID":"411381d4-3ea7-4578-b962-f2629f8ba142","Type":"ContainerStarted","Data":"cf2e3d6ec0473edab37e4f503e12135d65942437a7d6f4c1734a8f5321aa3122"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.049455 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-vm26v" event={"ID":"6295fb40-33f8-4b77-8c3e-d36037efa07e","Type":"ContainerStarted","Data":"8c29fd649602de8f621e2214642f16e88bf5f37a7ef4f1c728ae7553d9bb356a"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.051019 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" event={"ID":"08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6","Type":"ContainerStarted","Data":"5fffd6cf7ed1e3d383e553f946d9ab8bf5aa09f0d7d53f49c336f79a6dbde811"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.054976 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-49b2h" event={"ID":"2ab4b4e4-547c-4d23-bdb4-fc7f22902419","Type":"ContainerStarted","Data":"148fd22ce7b689bfffda0199ed3798625959fe5c64f03015e3db95566fb2b0f6"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.065036 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw" event={"ID":"a02a14aa-070b-42e4-a44b-bb6bd50e03b1","Type":"ContainerStarted","Data":"1bd668765c0149d3b0ee9a59bc2a2217f4b328a6069db4646aaabbe7ee3bc544"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.065163 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-f8kzr" event={"ID":"b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4","Type":"ContainerStarted","Data":"c29c229937a0d307ee5968d4f309c65f0315b4a9f83846efb3806626d6569009"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.071677 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8" event={"ID":"628bfff0-2254-4c7c-a8a4-01b2288d8535","Type":"ContainerStarted","Data":"e8297b6908180ab6b21c177d23a371c562bed65cc0b7067dfdcbfe69b4db7319"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.076717 4914 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" event={"ID":"6038ea14-7eab-4587-942f-acba6a8a100a","Type":"ContainerStarted","Data":"09c06c08635dd82046c1fed5dd09ed7770fa063ae612a9509a6c5e2371c00a9c"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.085229 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d" event={"ID":"9b8b420d-d475-4d19-84b8-113facbfcf09","Type":"ContainerStarted","Data":"f0896bbdd3f439cbd6d581afe49fd950f53c63a985b104a05f4929feeaede039"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.087234 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl" event={"ID":"2106d7e3-6e73-4305-aa0b-e96af20f2a18","Type":"ContainerStarted","Data":"5f23fb3f80f4e444ade7c81babe21bc143f98bc76b09844a9f9b85a9e17cebdb"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.091968 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cptz2" event={"ID":"8c4742c3-5115-49f4-85ed-a3eda8373114","Type":"ContainerStarted","Data":"f9860e2606fdbdd58cfc702c1565cb37db3484ca9459e4369bb9cf76173ebee7"} Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.097816 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.097949 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": 
dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.097822 4914 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nv9bs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.099285 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" podUID="745ec1ee-15c4-456b-9e1e-9015e27c4845" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.110129 4914 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-f8vp8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.110176 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" podUID="5bc9d257-6992-48cf-963b-42c22a5dd170" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.118269 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.118622 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:33.61856037 +0000 UTC m=+151.930910465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.118860 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.120433 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:33.620416489 +0000 UTC m=+151.932766654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.171598 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-rktkr" podStartSLOduration=130.171529316 podStartE2EDuration="2m10.171529316s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:33.16863526 +0000 UTC m=+151.480985345" watchObservedRunningTime="2026-01-27 13:46:33.171529316 +0000 UTC m=+151.483879401" Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.220283 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.222607 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:33.722585462 +0000 UTC m=+152.034935547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.298212 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l2wc2" podStartSLOduration=130.298191054 podStartE2EDuration="2m10.298191054s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:33.287919843 +0000 UTC m=+151.600269928" watchObservedRunningTime="2026-01-27 13:46:33.298191054 +0000 UTC m=+151.610541139"
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.308172 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-vm26v"
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.310765 4914 patch_prober.go:28] interesting pod/router-default-5444994796-vm26v container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.310850 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm26v" podUID="6295fb40-33f8-4b77-8c3e-d36037efa07e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.322781 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.323157 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:33.823145662 +0000 UTC m=+152.135495747 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:33 crc kubenswrapper[4914]: W0127 13:46:33.341736 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-eb6cc6b4239891cb8a8f5ead30fc2e5a6eff2096e367b163238f67dceb95c10d WatchSource:0}: Error finding container eb6cc6b4239891cb8a8f5ead30fc2e5a6eff2096e367b163238f67dceb95c10d: Status 404 returned error can't find the container with id eb6cc6b4239891cb8a8f5ead30fc2e5a6eff2096e367b163238f67dceb95c10d
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.400855 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" podStartSLOduration=129.40083764 podStartE2EDuration="2m9.40083764s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:33.364587824 +0000 UTC m=+151.676937909" watchObservedRunningTime="2026-01-27 13:46:33.40083764 +0000 UTC m=+151.713187715"
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.402794 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcxd8" podStartSLOduration=129.40278208 podStartE2EDuration="2m9.40278208s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:33.399958396 +0000 UTC m=+151.712308471" watchObservedRunningTime="2026-01-27 13:46:33.40278208 +0000 UTC m=+151.715132165"
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.423494 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.423620 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:33.92360331 +0000 UTC m=+152.235953395 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.423899 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.424291 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:33.924280987 +0000 UTC m=+152.236631072 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.524867 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.525086 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:34.025058903 +0000 UTC m=+152.337408988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.525446 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.526008 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:34.025966247 +0000 UTC m=+152.338316392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.564540 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd557" podStartSLOduration=130.564515743 podStartE2EDuration="2m10.564515743s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:33.563057395 +0000 UTC m=+151.875407490" watchObservedRunningTime="2026-01-27 13:46:33.564515743 +0000 UTC m=+151.876865848"
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.626819 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.627022 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:34.126978279 +0000 UTC m=+152.439328364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.627251 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.627585 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:34.127576926 +0000 UTC m=+152.439927011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.728501 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.728774 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:34.228758212 +0000 UTC m=+152.541108297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.829816 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.830215 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:34.330199406 +0000 UTC m=+152.642549501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.885737 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-z74jx" podStartSLOduration=129.885718129 podStartE2EDuration="2m9.885718129s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:33.88424697 +0000 UTC m=+152.196597055" watchObservedRunningTime="2026-01-27 13:46:33.885718129 +0000 UTC m=+152.198068214"
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.931316 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:46:33 crc kubenswrapper[4914]: E0127 13:46:33.932491 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:34.432456141 +0000 UTC m=+152.744806226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:33 crc kubenswrapper[4914]: I0127 13:46:33.971878 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mwzf8" podStartSLOduration=129.971859879 podStartE2EDuration="2m9.971859879s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:33.928692141 +0000 UTC m=+152.241042226" watchObservedRunningTime="2026-01-27 13:46:33.971859879 +0000 UTC m=+152.284209964"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.033231 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:46:34 crc kubenswrapper[4914]: E0127 13:46:34.033892 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:34.533874544 +0000 UTC m=+152.846224639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.049055 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-vm26v" podStartSLOduration=130.049039603 podStartE2EDuration="2m10.049039603s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.004495819 +0000 UTC m=+152.316845904" watchObservedRunningTime="2026-01-27 13:46:34.049039603 +0000 UTC m=+152.361389688"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.100545 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-62njv" podStartSLOduration=131.100526571 podStartE2EDuration="2m11.100526571s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.099261977 +0000 UTC m=+152.411612072" watchObservedRunningTime="2026-01-27 13:46:34.100526571 +0000 UTC m=+152.412876656"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.134705 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:46:34 crc kubenswrapper[4914]: E0127 13:46:34.134892 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:34.634874016 +0000 UTC m=+152.947224101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.134986 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:46:34 crc kubenswrapper[4914]: E0127 13:46:34.135279 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:34.635271176 +0000 UTC m=+152.947621261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.143023 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" event={"ID":"97635386-64c1-4f78-88f2-17faa845b6b2","Type":"ContainerStarted","Data":"28b24c53fb05ad646b6302ce717457f29759b9d6e97e6b52fd986810282c8c8d"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.144238 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.154068 4914 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ssxkp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body=
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.154114 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" podUID="97635386-64c1-4f78-88f2-17faa845b6b2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.154875 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65" event={"ID":"b3e4b037-3ec1-4d87-9434-1717d360ab61","Type":"ContainerStarted","Data":"3c0c88f5d20d1a010143c7e868c0097ccf7e9b3cb9fd6c63e1d89ac46771c6da"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.154938 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65" event={"ID":"b3e4b037-3ec1-4d87-9434-1717d360ab61","Type":"ContainerStarted","Data":"eb1b4e0dc2110abd5ce5ae9ee60a2f4b6ef58105ed013449b279c6a4eae140cf"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.155777 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.173307 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" podStartSLOduration=130.173286157 podStartE2EDuration="2m10.173286157s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.140882634 +0000 UTC m=+152.453232719" watchObservedRunningTime="2026-01-27 13:46:34.173286157 +0000 UTC m=+152.485636242"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.173498 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rkhw5" event={"ID":"cb849428-e713-45bf-b3d4-ddd350825372","Type":"ContainerStarted","Data":"07679468d2553a5a5f2a880889b34249eecaf9cda3b30d6436c77e5482ec85f1"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.174845 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2wzjv" podStartSLOduration=130.174822868 podStartE2EDuration="2m10.174822868s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.173222766 +0000 UTC m=+152.485572851" watchObservedRunningTime="2026-01-27 13:46:34.174822868 +0000 UTC m=+152.487172953"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.186532 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8" event={"ID":"628bfff0-2254-4c7c-a8a4-01b2288d8535","Type":"ContainerStarted","Data":"fec5c6e6655049584be55f7b23b4b7ecea7d5273c667d9af1951e6c2f59c6228"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.212876 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" podStartSLOduration=130.212859261 podStartE2EDuration="2m10.212859261s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.210437557 +0000 UTC m=+152.522787642" watchObservedRunningTime="2026-01-27 13:46:34.212859261 +0000 UTC m=+152.525209346"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.214675 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" event={"ID":"411381d4-3ea7-4578-b962-f2629f8ba142","Type":"ContainerStarted","Data":"9f70b8192add3b78f246b4beb6f145b8ecd663e50cc09dd4c636aae876825ceb"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.237079 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:46:34 crc kubenswrapper[4914]: E0127 13:46:34.238177 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:34.738157448 +0000 UTC m=+153.050507533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.245373 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" event={"ID":"7656576b-aeae-4b15-b2ab-18658770a1e5","Type":"ContainerStarted","Data":"1ae8590c8f2e12da5c32dca51ec53fe0bb5f6771669d305ca3d17672f3571794"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.246262 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.251899 4914 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-w75kk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.251947 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" podUID="7656576b-aeae-4b15-b2ab-18658770a1e5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.284047 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-wnsrs" podStartSLOduration=131.284029316 podStartE2EDuration="2m11.284029316s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.246549849 +0000 UTC m=+152.558899934" watchObservedRunningTime="2026-01-27 13:46:34.284029316 +0000 UTC m=+152.596379401"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.285896 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65" podStartSLOduration=130.285890746 podStartE2EDuration="2m10.285890746s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.285160586 +0000 UTC m=+152.597510671" watchObservedRunningTime="2026-01-27 13:46:34.285890746 +0000 UTC m=+152.598240831"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.309276 4914 patch_prober.go:28] interesting pod/router-default-5444994796-vm26v container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.309330 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm26v" podUID="6295fb40-33f8-4b77-8c3e-d36037efa07e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.328094 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"104d6a2a3c103ff68d089c7ed9e5bcbbcb54da0f66c6b56973750a31bc05ab30"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.328155 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"eb6cc6b4239891cb8a8f5ead30fc2e5a6eff2096e367b163238f67dceb95c10d"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.328171 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" event={"ID":"a1a517ef-fa49-4652-9039-30ef3b824353","Type":"ContainerStarted","Data":"23e72d6c5de9133f4433bdf5a85147348bbbfe9bb8a17d59388882af93f95630"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.328184 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" event={"ID":"a1a517ef-fa49-4652-9039-30ef3b824353","Type":"ContainerStarted","Data":"d258a768a961278c8d0f15a5fb92afb94bc2a85d0668d356b3325beaf8983c85"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.328197 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" event={"ID":"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d","Type":"ContainerStarted","Data":"62db313e514cc3ffb05a45edec75a30cb5e2c42f2aace1f48819576419125f98"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.328211 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fxxfj" event={"ID":"42b6b062-22d8-4a87-a0d1-8c24ec3e9637","Type":"ContainerStarted","Data":"cdfba0f8a68b8024a049fee2ed410d6e38e12070f6503b7ab9e31cddd108c3ba"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.328224 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96" event={"ID":"d2118bb9-5519-4158-9acc-e9e434d6da7d","Type":"ContainerStarted","Data":"12009bd2f6cc71070771f4853ed3ae13028b20212cd67aca4abd31ee51a4b335"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.331702 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdxmp" podStartSLOduration=130.331676282 podStartE2EDuration="2m10.331676282s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.326450734 +0000 UTC m=+152.638800839" watchObservedRunningTime="2026-01-27 13:46:34.331676282 +0000 UTC m=+152.644026367"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.336376 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" event={"ID":"08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6","Type":"ContainerStarted","Data":"64ab8fa141fdd0bc5d626bf28bd1b07afbb7255c6482e54f2c2e4089a6bdcf39"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.337288 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.338543 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:46:34 crc kubenswrapper[4914]: E0127 13:46:34.339918 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:34.839901519 +0000 UTC m=+153.152251644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.343933 4914 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ljqd4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.344185 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" podUID="08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.345274 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" event={"ID":"6038ea14-7eab-4587-942f-acba6a8a100a","Type":"ContainerStarted","Data":"bc02ab74153893b6430807ef570e093280364e6f63be37776f833795faed39c8"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.345330 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" event={"ID":"6038ea14-7eab-4587-942f-acba6a8a100a","Type":"ContainerStarted","Data":"4b5d2d4697a04ee48f294211d0af25d558124c210775144ad7d9fab569c44630"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.350251 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" event={"ID":"589608b0-5454-404a-acd7-f164145a1bc0","Type":"ContainerStarted","Data":"df8451f042b4ad5a49835685bb74d3a1645288a5a7fdb0e7b4684479dc9b7560"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.350585 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" event={"ID":"589608b0-5454-404a-acd7-f164145a1bc0","Type":"ContainerStarted","Data":"6ce727bcdc264d68ba82e36942ec954745896e7ab2efe875ec729afdca50fe65"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.354169 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b4l6h" event={"ID":"6d298bcb-b3cc-4a2d-a963-6930301982e0","Type":"ContainerStarted","Data":"53d59e955677dfcd27db5d2fc575b551aa1579dcc1d3575fafeba091f8b05266"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.358521 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" event={"ID":"8b6ea188-6622-4aaa-b9a1-209ff123514f","Type":"ContainerStarted","Data":"fe924041a883b1abb2434dad25953c2e429db14c53740e64ab397693df0327ba"}
Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.358767 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.364970 4914 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jnmjw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.365016 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" podUID="8b6ea188-6622-4aaa-b9a1-209ff123514f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.377881 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlnd8" podStartSLOduration=130.37785424 podStartE2EDuration="2m10.37785424s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.365549015 +0000 UTC m=+152.677899110" watchObservedRunningTime="2026-01-27 13:46:34.37785424 +0000 UTC m=+152.690204335" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.379874 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-49b2h" event={"ID":"2ab4b4e4-547c-4d23-bdb4-fc7f22902419","Type":"ContainerStarted","Data":"e5732d37418ed5726d03e6acf503f53001645b76119fe4e3462757d085539678"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.395643 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d" 
event={"ID":"9b8b420d-d475-4d19-84b8-113facbfcf09","Type":"ContainerStarted","Data":"86ccd03d0e4db9d44e648fec5f258ed7c80f5743034f49a6500c2cef34d78417"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.405273 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl" event={"ID":"2106d7e3-6e73-4305-aa0b-e96af20f2a18","Type":"ContainerStarted","Data":"11dccc055d894dc2f7484e5c695c9f464cdf7a4f8b0a51fa755aee5201e7d2e0"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.412283 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" podStartSLOduration=130.412265666 podStartE2EDuration="2m10.412265666s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.410360286 +0000 UTC m=+152.722710371" watchObservedRunningTime="2026-01-27 13:46:34.412265666 +0000 UTC m=+152.724615751" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.418294 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cptz2" event={"ID":"8c4742c3-5115-49f4-85ed-a3eda8373114","Type":"ContainerStarted","Data":"129c861658138524ca8b583333e8612ae1adb64cd480ba3dd8687d154562d3ed"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.418348 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cptz2" event={"ID":"8c4742c3-5115-49f4-85ed-a3eda8373114","Type":"ContainerStarted","Data":"4b202976b4d13c98f97ef9356723ee682e67188269f97adbb93c0ded604de783"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.425859 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" 
event={"ID":"9899e103-466c-4fb4-887b-916ca5e7ca72","Type":"ContainerStarted","Data":"150eca27117101fa94598b1287348f6dc8977aa07cebfdcfe78b493bc8c68dd6"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.426580 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.428663 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j" event={"ID":"630b7825-f758-4b21-ad2a-f08f54b23dfb","Type":"ContainerStarted","Data":"4b2c9bc7a5693169d6a583bac9ed3123550ebdcc8375eab2075a6e16f7b1421b"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.436312 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" event={"ID":"acfa8e86-e574-4d67-91f7-35a45e1956ee","Type":"ContainerStarted","Data":"2df5c848b7b08b592053336c419ed05454c1fa88bbe4db217c6821f0bd9b5b8a"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.439197 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-f8kzr" event={"ID":"b4b872ac-004a-43d1-b1d1-a12f2ac2f3f4","Type":"ContainerStarted","Data":"390bf1daa961e03c795f400a72729821cf32a9c3dea8330754276e38dce6afcd"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.441348 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:34 crc kubenswrapper[4914]: E0127 13:46:34.442881 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:34.942860843 +0000 UTC m=+153.255210928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.444971 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"24d6aa1ee6f6e3c995b93f7adfa845874f39541fe9134aaeba0c57111f11174a"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.445036 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"affc73b12551a2cc6bbcaf765200c8bf5f7edd0377dcdf4b2f47f3b2fdab2141"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.445725 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.447999 4914 csr.go:261] certificate signing request csr-9d7xf is approved, waiting to be issued Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.453446 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8tqf2" podStartSLOduration=130.45343102 podStartE2EDuration="2m10.45343102s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.45302211 +0000 UTC m=+152.765372195" watchObservedRunningTime="2026-01-27 13:46:34.45343102 +0000 UTC m=+152.765781105" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.453458 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"fc1e95bff01592206e252ad6ebffdc39f0a251f656b003ac9f18a3f2bcfba45b"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.453591 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"fb077a1b50da40386decf130347c32ce1541ea4f0de08de57ffcb10c0d10fac3"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.456644 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8skmg" event={"ID":"c2ae126b-c993-4138-b9a6-dc7a99e9f69e","Type":"ContainerStarted","Data":"acf1caa5b4efa38f8eb5ca6783990e485862fefefbf939c74e590ef6eba59b23"} Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.457055 4914 csr.go:257] certificate signing request csr-9d7xf is issued Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.458530 4914 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-62njv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" start-of-body= Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.458569 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-62njv" podUID="109be131-7cbd-4205-b5c7-eaf7790737f4" 
containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.503115 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" podStartSLOduration=130.5030888 podStartE2EDuration="2m10.5030888s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.49853838 +0000 UTC m=+152.810888475" watchObservedRunningTime="2026-01-27 13:46:34.5030888 +0000 UTC m=+152.815438885" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.543091 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:34 crc kubenswrapper[4914]: E0127 13:46:34.546558 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.046540665 +0000 UTC m=+153.358890850 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.572548 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-49b2h" podStartSLOduration=130.572509629 podStartE2EDuration="2m10.572509629s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.543845154 +0000 UTC m=+152.856195259" watchObservedRunningTime="2026-01-27 13:46:34.572509629 +0000 UTC m=+152.884859714" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.580619 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" podStartSLOduration=131.580569452 podStartE2EDuration="2m11.580569452s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.570953388 +0000 UTC m=+152.883303473" watchObservedRunningTime="2026-01-27 13:46:34.580569452 +0000 UTC m=+152.892919547" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.604005 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jsk4m" podStartSLOduration=130.603975878 podStartE2EDuration="2m10.603975878s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.603861935 +0000 UTC m=+152.916212030" watchObservedRunningTime="2026-01-27 13:46:34.603975878 +0000 UTC m=+152.916325963" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.644603 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:34 crc kubenswrapper[4914]: E0127 13:46:34.645108 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.145086872 +0000 UTC m=+153.457436957 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.660434 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mbz96" podStartSLOduration=130.660416196 podStartE2EDuration="2m10.660416196s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.658875886 +0000 UTC m=+152.971225991" watchObservedRunningTime="2026-01-27 13:46:34.660416196 +0000 UTC m=+152.972766301" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.726586 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw" podStartSLOduration=130.726563189 podStartE2EDuration="2m10.726563189s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.689715729 +0000 UTC m=+153.002065814" watchObservedRunningTime="2026-01-27 13:46:34.726563189 +0000 UTC m=+153.038913274" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.746882 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:34 crc kubenswrapper[4914]: E0127 13:46:34.747223 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.247209903 +0000 UTC m=+153.559559988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.785360 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" podStartSLOduration=94.785343499 podStartE2EDuration="1m34.785343499s" podCreationTimestamp="2026-01-27 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.729647701 +0000 UTC m=+153.041997786" watchObservedRunningTime="2026-01-27 13:46:34.785343499 +0000 UTC m=+153.097693584" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.786399 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9mj7d" podStartSLOduration=130.786393106 podStartE2EDuration="2m10.786393106s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.78391056 +0000 UTC m=+153.096260645" watchObservedRunningTime="2026-01-27 13:46:34.786393106 +0000 UTC m=+153.098743191" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.848252 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:34 crc kubenswrapper[4914]: E0127 13:46:34.848490 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.348426561 +0000 UTC m=+153.660776646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.848716 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:34 crc kubenswrapper[4914]: E0127 13:46:34.849033 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.349018267 +0000 UTC m=+153.661368352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.878354 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2jgl" podStartSLOduration=130.878337819 podStartE2EDuration="2m10.878337819s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.81956431 +0000 UTC m=+153.131914385" watchObservedRunningTime="2026-01-27 13:46:34.878337819 +0000 UTC m=+153.190687924" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.878520 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" podStartSLOduration=130.878516234 podStartE2EDuration="2m10.878516234s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.873387559 +0000 UTC m=+153.185737644" watchObservedRunningTime="2026-01-27 13:46:34.878516234 +0000 UTC m=+153.190866319" Jan 27 13:46:34 crc kubenswrapper[4914]: I0127 13:46:34.949414 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:34 crc kubenswrapper[4914]: E0127 13:46:34.949693 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.44967748 +0000 UTC m=+153.762027565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:34.999979 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-6tlvd" podStartSLOduration=130.999962705 podStartE2EDuration="2m10.999962705s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:34.971132035 +0000 UTC m=+153.283482120" watchObservedRunningTime="2026-01-27 13:46:34.999962705 +0000 UTC m=+153.312312790" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.000627 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hsggd" podStartSLOduration=131.000621852 podStartE2EDuration="2m11.000621852s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
13:46:34.998130136 +0000 UTC m=+153.310480221" watchObservedRunningTime="2026-01-27 13:46:35.000621852 +0000 UTC m=+153.312971937" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.051010 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.051444 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.551428721 +0000 UTC m=+153.863778806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.091909 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8skmg" podStartSLOduration=7.091888157 podStartE2EDuration="7.091888157s" podCreationTimestamp="2026-01-27 13:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:35.090570083 +0000 UTC m=+153.402920168" watchObservedRunningTime="2026-01-27 13:46:35.091888157 +0000 UTC m=+153.404238242" Jan 27 
13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.093765 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-f8kzr" podStartSLOduration=7.093753816 podStartE2EDuration="7.093753816s" podCreationTimestamp="2026-01-27 13:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:35.045749331 +0000 UTC m=+153.358099416" watchObservedRunningTime="2026-01-27 13:46:35.093753816 +0000 UTC m=+153.406103901" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.152554 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.152746 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.65271858 +0000 UTC m=+153.965068665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.153209 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.153506 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.653493491 +0000 UTC m=+153.965843576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.176163 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vw28j" podStartSLOduration=132.176139387 podStartE2EDuration="2m12.176139387s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:35.173903659 +0000 UTC m=+153.486253744" watchObservedRunningTime="2026-01-27 13:46:35.176139387 +0000 UTC m=+153.488489472" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.204507 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" podStartSLOduration=132.204487065 podStartE2EDuration="2m12.204487065s" podCreationTimestamp="2026-01-27 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:35.200607642 +0000 UTC m=+153.512957717" watchObservedRunningTime="2026-01-27 13:46:35.204487065 +0000 UTC m=+153.516837160" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.254346 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.254488 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.754469512 +0000 UTC m=+154.066819597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.254531 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.254844 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.754821802 +0000 UTC m=+154.067171887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.280176 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cptz2" podStartSLOduration=131.28015602 podStartE2EDuration="2m11.28015602s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:35.277763836 +0000 UTC m=+153.590113921" watchObservedRunningTime="2026-01-27 13:46:35.28015602 +0000 UTC m=+153.592506105" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.311779 4914 patch_prober.go:28] interesting pod/router-default-5444994796-vm26v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:46:35 crc kubenswrapper[4914]: [-]has-synced failed: reason withheld Jan 27 13:46:35 crc kubenswrapper[4914]: [+]process-running ok Jan 27 13:46:35 crc kubenswrapper[4914]: healthz check failed Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.311872 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm26v" podUID="6295fb40-33f8-4b77-8c3e-d36037efa07e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.356038 4914 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.356252 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.856206684 +0000 UTC m=+154.168556759 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.356408 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.356786 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.856770078 +0000 UTC m=+154.169120163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.457298 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.457654 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:35.957633517 +0000 UTC m=+154.269983612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.458130 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 13:41:34 +0000 UTC, rotation deadline is 2026-11-15 11:20:12.640374406 +0000 UTC Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.458165 4914 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7005h33m37.182211266s for next certificate rotation Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.462007 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9vdw" event={"ID":"a02a14aa-070b-42e4-a44b-bb6bd50e03b1","Type":"ContainerStarted","Data":"53513bd95f07e94ee7c7ea0e790f989333ce5c91a61e0e48f3f5fc0870209e06"} Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.463340 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b4l6h" event={"ID":"6d298bcb-b3cc-4a2d-a963-6930301982e0","Type":"ContainerStarted","Data":"dbea880881a8d28e1f3109499db2f9a3f1e03c506af955ccac0c62b66890ee06"} Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.466011 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rkhw5" event={"ID":"cb849428-e713-45bf-b3d4-ddd350825372","Type":"ContainerStarted","Data":"f630c61d2b4a52210e2b143cdf9c813bacecca3b4ac55998637e3137fee7d0a8"} Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.468199 4914 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fxxfj" event={"ID":"42b6b062-22d8-4a87-a0d1-8c24ec3e9637","Type":"ContainerStarted","Data":"5624fd4f139482f9ba40cc411ea45ecc3c32a77ba0cff3891c378376ec7c4f38"} Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.468726 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-fxxfj" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.470943 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" event={"ID":"543ed275-142d-4301-a0c2-33a99233ee0d","Type":"ContainerStarted","Data":"be3c83fa3975edd4410ba946e2d3385ffe0bae8a5b876502e01ad6a2a8f751fe"} Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.472537 4914 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-w75kk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.472602 4914 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jnmjw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.472624 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" podUID="7656576b-aeae-4b15-b2ab-18658770a1e5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.472643 4914 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" podUID="8b6ea188-6622-4aaa-b9a1-209ff123514f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.472611 4914 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ssxkp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.472696 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" podUID="97635386-64c1-4f78-88f2-17faa845b6b2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.472970 4914 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ljqd4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.473016 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" podUID="08d4f0d9-018d-4a07-92ce-bb8b90d0bbf6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.492511 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-b4l6h" 
podStartSLOduration=131.492482425 podStartE2EDuration="2m11.492482425s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:35.490210796 +0000 UTC m=+153.802560881" watchObservedRunningTime="2026-01-27 13:46:35.492482425 +0000 UTC m=+153.804832510" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.520409 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-fxxfj" podStartSLOduration=7.52038833 podStartE2EDuration="7.52038833s" podCreationTimestamp="2026-01-27 13:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:35.517619767 +0000 UTC m=+153.829969852" watchObservedRunningTime="2026-01-27 13:46:35.52038833 +0000 UTC m=+153.832738415" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.551942 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rkhw5" podStartSLOduration=131.551921642 podStartE2EDuration="2m11.551921642s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:35.550486944 +0000 UTC m=+153.862837029" watchObservedRunningTime="2026-01-27 13:46:35.551921642 +0000 UTC m=+153.864271727" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.558541 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" 
Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.559063 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.059041419 +0000 UTC m=+154.371391544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.583154 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" podStartSLOduration=131.583138455 podStartE2EDuration="2m11.583138455s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:35.579763316 +0000 UTC m=+153.892113401" watchObservedRunningTime="2026-01-27 13:46:35.583138455 +0000 UTC m=+153.895488530" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.593091 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.593427 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.594894 4914 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4jdfm container/openshift-apiserver namespace/openshift-apiserver: Startup probe 
status=failure output="Get \"https://10.217.0.20:8443/livez\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.594941 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" podUID="589608b0-5454-404a-acd7-f164145a1bc0" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.20:8443/livez\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.661211 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.661410 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.161387277 +0000 UTC m=+154.473737362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.661497 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.661869 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.161849329 +0000 UTC m=+154.474199414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.762844 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.763275 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.26322276 +0000 UTC m=+154.575572845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.864602 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.865576 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.365557988 +0000 UTC m=+154.677908083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.894119 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.894172 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.895574 4914 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-2lg87 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.895646 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" podUID="543ed275-142d-4301-a0c2-33a99233ee0d" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.966548 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:35 crc 
kubenswrapper[4914]: E0127 13:46:35.966682 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.466662962 +0000 UTC m=+154.779013047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:35 crc kubenswrapper[4914]: I0127 13:46:35.966923 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:35 crc kubenswrapper[4914]: E0127 13:46:35.967232 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.467221817 +0000 UTC m=+154.779571902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.067819 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.068052 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.568017553 +0000 UTC m=+154.880367648 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.068228 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.068585 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.568571528 +0000 UTC m=+154.880921613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.169578 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.169796 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.669761974 +0000 UTC m=+154.982112079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.169899 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.170283 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.670271058 +0000 UTC m=+154.982621143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.271423 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.271594 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.771571468 +0000 UTC m=+155.083921563 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.271728 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.272060 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.772048711 +0000 UTC m=+155.084398796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.312727 4914 patch_prober.go:28] interesting pod/router-default-5444994796-vm26v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:46:36 crc kubenswrapper[4914]: [-]has-synced failed: reason withheld Jan 27 13:46:36 crc kubenswrapper[4914]: [+]process-running ok Jan 27 13:46:36 crc kubenswrapper[4914]: healthz check failed Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.312803 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm26v" podUID="6295fb40-33f8-4b77-8c3e-d36037efa07e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.373272 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.373516 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 13:46:36.873458083 +0000 UTC m=+155.185808168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.373604 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.373914 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.873900255 +0000 UTC m=+155.186250340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.474635 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.474854 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.974775173 +0000 UTC m=+155.287125258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.475030 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.475355 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:36.975343728 +0000 UTC m=+155.287693813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.477583 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" event={"ID":"1e9e06d9-6682-4f7f-a4a0-36414213490b","Type":"ContainerStarted","Data":"2e8c2cf305607d11eba5e4013c47408e1e0258f08f71273d8c2d8a697be7274c"} Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.478206 4914 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-w75kk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.478257 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" podUID="7656576b-aeae-4b15-b2ab-18658770a1e5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.491433 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljqd4" Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.576546 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.578137 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:37.078115107 +0000 UTC m=+155.390465202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.678532 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.678954 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:37.178939054 +0000 UTC m=+155.491289139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.780012 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.780115 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:37.28009866 +0000 UTC m=+155.592448745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.780350 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.780692 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:37.280684945 +0000 UTC m=+155.593035030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.882100 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.882232 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:37.382206571 +0000 UTC m=+155.694556656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.882615 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.883094 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:37.383059474 +0000 UTC m=+155.695409559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.983914 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.984120 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:37.484086766 +0000 UTC m=+155.796436851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:36 crc kubenswrapper[4914]: I0127 13:46:36.984206 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:36 crc kubenswrapper[4914]: E0127 13:46:36.984499 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:37.484485957 +0000 UTC m=+155.796836042 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.084952 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:37 crc kubenswrapper[4914]: E0127 13:46:37.085143 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:37.585115049 +0000 UTC m=+155.897465144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.085283 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:37 crc kubenswrapper[4914]: E0127 13:46:37.085576 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:37.585564271 +0000 UTC m=+155.897914356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.186170 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:37 crc kubenswrapper[4914]: E0127 13:46:37.186481 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:37.68646583 +0000 UTC m=+155.998815915 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.287792 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:37 crc kubenswrapper[4914]: E0127 13:46:37.288252 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:37.788231932 +0000 UTC m=+156.100582037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.310156 4914 patch_prober.go:28] interesting pod/router-default-5444994796-vm26v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:46:37 crc kubenswrapper[4914]: [-]has-synced failed: reason withheld Jan 27 13:46:37 crc kubenswrapper[4914]: [+]process-running ok Jan 27 13:46:37 crc kubenswrapper[4914]: healthz check failed Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.310220 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm26v" podUID="6295fb40-33f8-4b77-8c3e-d36037efa07e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.388579 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:37 crc kubenswrapper[4914]: E0127 13:46:37.388989 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 13:46:37.888973337 +0000 UTC m=+156.201323422 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.419779 4914 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-k9l4r container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.419894 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" podUID="9899e103-466c-4fb4-887b-916ca5e7ca72" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.419803 4914 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-k9l4r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.420130 4914 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" podUID="9899e103-466c-4fb4-887b-916ca5e7ca72" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.478470 4914 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ssxkp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.478622 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" podUID="97635386-64c1-4f78-88f2-17faa845b6b2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.490788 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:37 crc kubenswrapper[4914]: E0127 13:46:37.491333 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 13:46:37.991320134 +0000 UTC m=+156.303670219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.513039 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lgzjm"] Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.514758 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgzjm" Jan 27 13:46:37 crc kubenswrapper[4914]: W0127 13:46:37.524263 4914 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: secrets "community-operators-dockercfg-dmngl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 27 13:46:37 crc kubenswrapper[4914]: E0127 13:46:37.524313 4914 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"community-operators-dockercfg-dmngl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.548326 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-lgzjm"] Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.592770 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.593084 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g25c2\" (UniqueName: \"kubernetes.io/projected/02984395-bee4-40bd-98ab-2bf03009bb9f-kube-api-access-g25c2\") pod \"community-operators-lgzjm\" (UID: \"02984395-bee4-40bd-98ab-2bf03009bb9f\") " pod="openshift-marketplace/community-operators-lgzjm" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.593280 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02984395-bee4-40bd-98ab-2bf03009bb9f-catalog-content\") pod \"community-operators-lgzjm\" (UID: \"02984395-bee4-40bd-98ab-2bf03009bb9f\") " pod="openshift-marketplace/community-operators-lgzjm" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.593309 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02984395-bee4-40bd-98ab-2bf03009bb9f-utilities\") pod \"community-operators-lgzjm\" (UID: \"02984395-bee4-40bd-98ab-2bf03009bb9f\") " pod="openshift-marketplace/community-operators-lgzjm" Jan 27 13:46:37 crc kubenswrapper[4914]: E0127 13:46:37.593562 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 13:46:38.093545138 +0000 UTC m=+156.405895223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.632263 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pkm2z"] Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.634316 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pkm2z" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.647689 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.666197 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pkm2z"] Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.691647 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.691722 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.694959 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g25c2\" (UniqueName: \"kubernetes.io/projected/02984395-bee4-40bd-98ab-2bf03009bb9f-kube-api-access-g25c2\") pod \"community-operators-lgzjm\" (UID: \"02984395-bee4-40bd-98ab-2bf03009bb9f\") " pod="openshift-marketplace/community-operators-lgzjm" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.695026 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-catalog-content\") pod \"certified-operators-pkm2z\" (UID: \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\") " pod="openshift-marketplace/certified-operators-pkm2z" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.695048 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6frj\" (UniqueName: \"kubernetes.io/projected/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-kube-api-access-m6frj\") pod \"certified-operators-pkm2z\" (UID: \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\") " pod="openshift-marketplace/certified-operators-pkm2z" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.695081 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02984395-bee4-40bd-98ab-2bf03009bb9f-catalog-content\") pod \"community-operators-lgzjm\" (UID: \"02984395-bee4-40bd-98ab-2bf03009bb9f\") " pod="openshift-marketplace/community-operators-lgzjm" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.695106 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-utilities\") pod \"certified-operators-pkm2z\" 
(UID: \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\") " pod="openshift-marketplace/certified-operators-pkm2z" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.695124 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02984395-bee4-40bd-98ab-2bf03009bb9f-utilities\") pod \"community-operators-lgzjm\" (UID: \"02984395-bee4-40bd-98ab-2bf03009bb9f\") " pod="openshift-marketplace/community-operators-lgzjm" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.695157 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:37 crc kubenswrapper[4914]: E0127 13:46:37.695452 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:38.195439454 +0000 UTC m=+156.507789539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.696021 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02984395-bee4-40bd-98ab-2bf03009bb9f-catalog-content\") pod \"community-operators-lgzjm\" (UID: \"02984395-bee4-40bd-98ab-2bf03009bb9f\") " pod="openshift-marketplace/community-operators-lgzjm" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.696249 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02984395-bee4-40bd-98ab-2bf03009bb9f-utilities\") pod \"community-operators-lgzjm\" (UID: \"02984395-bee4-40bd-98ab-2bf03009bb9f\") " pod="openshift-marketplace/community-operators-lgzjm" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.745609 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g25c2\" (UniqueName: \"kubernetes.io/projected/02984395-bee4-40bd-98ab-2bf03009bb9f-kube-api-access-g25c2\") pod \"community-operators-lgzjm\" (UID: \"02984395-bee4-40bd-98ab-2bf03009bb9f\") " pod="openshift-marketplace/community-operators-lgzjm" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.760482 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k9l4r" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.796569 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:37 crc kubenswrapper[4914]: E0127 13:46:37.796738 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:38.296711153 +0000 UTC m=+156.609061238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.796809 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.796890 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-catalog-content\") pod \"certified-operators-pkm2z\" (UID: \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\") " pod="openshift-marketplace/certified-operators-pkm2z" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 
13:46:37.796911 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6frj\" (UniqueName: \"kubernetes.io/projected/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-kube-api-access-m6frj\") pod \"certified-operators-pkm2z\" (UID: \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\") " pod="openshift-marketplace/certified-operators-pkm2z" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.796944 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-utilities\") pod \"certified-operators-pkm2z\" (UID: \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\") " pod="openshift-marketplace/certified-operators-pkm2z" Jan 27 13:46:37 crc kubenswrapper[4914]: E0127 13:46:37.797133 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:38.297124184 +0000 UTC m=+156.609474269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.797463 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-catalog-content\") pod \"certified-operators-pkm2z\" (UID: \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\") " pod="openshift-marketplace/certified-operators-pkm2z" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.797515 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-utilities\") pod \"certified-operators-pkm2z\" (UID: \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\") " pod="openshift-marketplace/certified-operators-pkm2z" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.827584 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6frj\" (UniqueName: \"kubernetes.io/projected/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-kube-api-access-m6frj\") pod \"certified-operators-pkm2z\" (UID: \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\") " pod="openshift-marketplace/certified-operators-pkm2z" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.834235 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r4lbv"] Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.835195 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r4lbv" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.870243 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4lbv"] Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.898370 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:37 crc kubenswrapper[4914]: E0127 13:46:37.898554 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:38.398530646 +0000 UTC m=+156.710880731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.898662 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-catalog-content\") pod \"community-operators-r4lbv\" (UID: \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\") " pod="openshift-marketplace/community-operators-r4lbv" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.898765 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-utilities\") pod \"community-operators-r4lbv\" (UID: \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\") " pod="openshift-marketplace/community-operators-r4lbv" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.898820 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxd8x\" (UniqueName: \"kubernetes.io/projected/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-kube-api-access-lxd8x\") pod \"community-operators-r4lbv\" (UID: \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\") " pod="openshift-marketplace/community-operators-r4lbv" Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.899107 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:37 crc kubenswrapper[4914]: E0127 13:46:37.899419 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:38.39940387 +0000 UTC m=+156.711753955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:37 crc kubenswrapper[4914]: I0127 13:46:37.946068 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pkm2z" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:37.999977 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:38 crc kubenswrapper[4914]: E0127 13:46:38.000165 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:38.500148035 +0000 UTC m=+156.812498130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.000212 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-catalog-content\") pod \"community-operators-r4lbv\" (UID: \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\") " pod="openshift-marketplace/community-operators-r4lbv" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.000250 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-utilities\") pod \"community-operators-r4lbv\" (UID: \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\") " pod="openshift-marketplace/community-operators-r4lbv" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.000274 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxd8x\" (UniqueName: \"kubernetes.io/projected/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-kube-api-access-lxd8x\") pod \"community-operators-r4lbv\" (UID: \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\") " pod="openshift-marketplace/community-operators-r4lbv" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.000315 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: 
\"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:38 crc kubenswrapper[4914]: E0127 13:46:38.000582 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:38.500575126 +0000 UTC m=+156.812925221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.000752 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-catalog-content\") pod \"community-operators-r4lbv\" (UID: \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\") " pod="openshift-marketplace/community-operators-r4lbv" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.000803 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-utilities\") pod \"community-operators-r4lbv\" (UID: \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\") " pod="openshift-marketplace/community-operators-r4lbv" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.020469 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8q7ck"] Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.021677 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.036789 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxd8x\" (UniqueName: \"kubernetes.io/projected/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-kube-api-access-lxd8x\") pod \"community-operators-r4lbv\" (UID: \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\") " pod="openshift-marketplace/community-operators-r4lbv" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.037375 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8q7ck"] Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.107549 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:38 crc kubenswrapper[4914]: E0127 13:46:38.107715 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:38.607688929 +0000 UTC m=+156.920039014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.107855 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.107908 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2a8a9b-5334-4de2-9198-7677a52f8002-catalog-content\") pod \"certified-operators-8q7ck\" (UID: \"1f2a8a9b-5334-4de2-9198-7677a52f8002\") " pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.108028 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qbl7\" (UniqueName: \"kubernetes.io/projected/1f2a8a9b-5334-4de2-9198-7677a52f8002-kube-api-access-7qbl7\") pod \"certified-operators-8q7ck\" (UID: \"1f2a8a9b-5334-4de2-9198-7677a52f8002\") " pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.108099 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1f2a8a9b-5334-4de2-9198-7677a52f8002-utilities\") pod \"certified-operators-8q7ck\" (UID: \"1f2a8a9b-5334-4de2-9198-7677a52f8002\") " pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:46:38 crc kubenswrapper[4914]: E0127 13:46:38.108278 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:38.608270624 +0000 UTC m=+156.920620709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.210460 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.210899 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2a8a9b-5334-4de2-9198-7677a52f8002-catalog-content\") pod \"certified-operators-8q7ck\" (UID: \"1f2a8a9b-5334-4de2-9198-7677a52f8002\") " pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.210932 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qbl7\" 
(UniqueName: \"kubernetes.io/projected/1f2a8a9b-5334-4de2-9198-7677a52f8002-kube-api-access-7qbl7\") pod \"certified-operators-8q7ck\" (UID: \"1f2a8a9b-5334-4de2-9198-7677a52f8002\") " pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.210958 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2a8a9b-5334-4de2-9198-7677a52f8002-utilities\") pod \"certified-operators-8q7ck\" (UID: \"1f2a8a9b-5334-4de2-9198-7677a52f8002\") " pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.211389 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2a8a9b-5334-4de2-9198-7677a52f8002-utilities\") pod \"certified-operators-8q7ck\" (UID: \"1f2a8a9b-5334-4de2-9198-7677a52f8002\") " pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:46:38 crc kubenswrapper[4914]: E0127 13:46:38.211451 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:38.711436643 +0000 UTC m=+157.023786728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.211661 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2a8a9b-5334-4de2-9198-7677a52f8002-catalog-content\") pod \"certified-operators-8q7ck\" (UID: \"1f2a8a9b-5334-4de2-9198-7677a52f8002\") " pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.239152 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qbl7\" (UniqueName: \"kubernetes.io/projected/1f2a8a9b-5334-4de2-9198-7677a52f8002-kube-api-access-7qbl7\") pod \"certified-operators-8q7ck\" (UID: \"1f2a8a9b-5334-4de2-9198-7677a52f8002\") " pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.322265 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:38 crc kubenswrapper[4914]: E0127 13:46:38.322705 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 13:46:38.822688085 +0000 UTC m=+157.135038180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.334985 4914 patch_prober.go:28] interesting pod/router-default-5444994796-vm26v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:46:38 crc kubenswrapper[4914]: [-]has-synced failed: reason withheld Jan 27 13:46:38 crc kubenswrapper[4914]: [+]process-running ok Jan 27 13:46:38 crc kubenswrapper[4914]: healthz check failed Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.335043 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm26v" podUID="6295fb40-33f8-4b77-8c3e-d36037efa07e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.367670 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.423729 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:38 crc kubenswrapper[4914]: E0127 13:46:38.424016 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:38.924000195 +0000 UTC m=+157.236350280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.454336 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pkm2z"] Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.517860 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pkm2z" event={"ID":"dc6a9b51-d0a6-4370-94bd-342dcfa54a99","Type":"ContainerStarted","Data":"a2badf41b5f42b65537d0a64298f09822fe0b006f6f9c90c91192a1563880658"} Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.525783 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:38 crc kubenswrapper[4914]: E0127 13:46:38.526412 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:39.026397933 +0000 UTC m=+157.338748028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.627384 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:38 crc kubenswrapper[4914]: E0127 13:46:38.627672 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:39.127652193 +0000 UTC m=+157.440002278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.728689 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:38 crc kubenswrapper[4914]: E0127 13:46:38.728985 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:39.228973663 +0000 UTC m=+157.541323748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.766649 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8q7ck"] Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.829665 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:38 crc kubenswrapper[4914]: E0127 13:46:38.829954 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:39.329938684 +0000 UTC m=+157.642288769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.844912 4914 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/community-operators-lgzjm" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.845000 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgzjm" Jan 27 13:46:38 crc kubenswrapper[4914]: I0127 13:46:38.934737 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:38 crc kubenswrapper[4914]: E0127 13:46:38.935214 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:39.435197757 +0000 UTC m=+157.747547842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.035406 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:39 crc kubenswrapper[4914]: E0127 13:46:39.036116 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:39.536094016 +0000 UTC m=+157.848444101 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.092558 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.098220 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4lbv" Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.137632 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:39 crc kubenswrapper[4914]: E0127 13:46:39.138037 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:39.638019783 +0000 UTC m=+157.950369868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.238485 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:39 crc kubenswrapper[4914]: E0127 13:46:39.238992 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:39.738973924 +0000 UTC m=+158.051324019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.316041 4914 patch_prober.go:28] interesting pod/router-default-5444994796-vm26v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:46:39 crc kubenswrapper[4914]: [-]has-synced failed: reason withheld Jan 27 13:46:39 crc kubenswrapper[4914]: [+]process-running ok Jan 27 13:46:39 crc kubenswrapper[4914]: healthz check failed Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.316102 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm26v" podUID="6295fb40-33f8-4b77-8c3e-d36037efa07e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.343663 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:39 crc kubenswrapper[4914]: E0127 13:46:39.344087 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 13:46:39.844070803 +0000 UTC m=+158.156420888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.444697 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:39 crc kubenswrapper[4914]: E0127 13:46:39.444909 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:39.94487984 +0000 UTC m=+158.257229925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.445328 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:39 crc kubenswrapper[4914]: E0127 13:46:39.445582 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:39.945570559 +0000 UTC m=+158.257920644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.533508 4914 generic.go:334] "Generic (PLEG): container finished" podID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" containerID="953a306865315e87452db34b5179322bec808bb6108b207635b49dc7e4dfcf00" exitCode=0 Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.533573 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pkm2z" event={"ID":"dc6a9b51-d0a6-4370-94bd-342dcfa54a99","Type":"ContainerDied","Data":"953a306865315e87452db34b5179322bec808bb6108b207635b49dc7e4dfcf00"} Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.538507 4914 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.540997 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" event={"ID":"1e9e06d9-6682-4f7f-a4a0-36414213490b","Type":"ContainerStarted","Data":"0dc1d3628e2e463260c372ce725038b361aa05049494fc88b8eb2663d53efa69"} Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.546327 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:39 crc kubenswrapper[4914]: E0127 13:46:39.546717 4914 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:40.046700944 +0000 UTC m=+158.359051029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.551939 4914 generic.go:334] "Generic (PLEG): container finished" podID="1f2a8a9b-5334-4de2-9198-7677a52f8002" containerID="34239dcd89cfaf4e1cf48148113709125ae8f51ed16f7bc222bfb4197ed96dc7" exitCode=0 Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.551986 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8q7ck" event={"ID":"1f2a8a9b-5334-4de2-9198-7677a52f8002","Type":"ContainerDied","Data":"34239dcd89cfaf4e1cf48148113709125ae8f51ed16f7bc222bfb4197ed96dc7"} Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.552012 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8q7ck" event={"ID":"1f2a8a9b-5334-4de2-9198-7677a52f8002","Type":"ContainerStarted","Data":"1a6f82b4a8ae6d0fa3568f02c0861f4cd98891d44a15681759989e4b4661a1d7"} Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.591503 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lgzjm"] Jan 27 13:46:39 crc kubenswrapper[4914]: W0127 13:46:39.594196 4914 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02984395_bee4_40bd_98ab_2bf03009bb9f.slice/crio-890770ee8db7e33ba7dfa78e1c2d1de3690d432d8bd59aa6c8150a8c6433f400 WatchSource:0}: Error finding container 890770ee8db7e33ba7dfa78e1c2d1de3690d432d8bd59aa6c8150a8c6433f400: Status 404 returned error can't find the container with id 890770ee8db7e33ba7dfa78e1c2d1de3690d432d8bd59aa6c8150a8c6433f400 Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.631281 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nfpmr"] Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.632527 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfpmr" Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.651145 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.651168 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4lbv"] Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.651761 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:39 crc kubenswrapper[4914]: E0127 13:46:39.652078 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:40.1520673 +0000 UTC m=+158.464417375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.664687 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfpmr"] Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.756323 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.756552 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-catalog-content\") pod \"redhat-marketplace-nfpmr\" (UID: \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\") " pod="openshift-marketplace/redhat-marketplace-nfpmr" Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.756656 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-utilities\") pod \"redhat-marketplace-nfpmr\" (UID: \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\") " pod="openshift-marketplace/redhat-marketplace-nfpmr" Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.756710 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-dxgxx\" (UniqueName: \"kubernetes.io/projected/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-kube-api-access-dxgxx\") pod \"redhat-marketplace-nfpmr\" (UID: \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\") " pod="openshift-marketplace/redhat-marketplace-nfpmr" Jan 27 13:46:39 crc kubenswrapper[4914]: E0127 13:46:39.756922 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:40.256901284 +0000 UTC m=+158.569251369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.858693 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-catalog-content\") pod \"redhat-marketplace-nfpmr\" (UID: \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\") " pod="openshift-marketplace/redhat-marketplace-nfpmr" Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.859042 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-utilities\") pod \"redhat-marketplace-nfpmr\" (UID: \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\") " pod="openshift-marketplace/redhat-marketplace-nfpmr" Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.859088 4914 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dxgxx\" (UniqueName: \"kubernetes.io/projected/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-kube-api-access-dxgxx\") pod \"redhat-marketplace-nfpmr\" (UID: \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\") " pod="openshift-marketplace/redhat-marketplace-nfpmr" Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.859136 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:39 crc kubenswrapper[4914]: E0127 13:46:39.859445 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:40.359430566 +0000 UTC m=+158.671780661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.863640 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-utilities\") pod \"redhat-marketplace-nfpmr\" (UID: \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\") " pod="openshift-marketplace/redhat-marketplace-nfpmr" Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.864114 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-catalog-content\") pod \"redhat-marketplace-nfpmr\" (UID: \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\") " pod="openshift-marketplace/redhat-marketplace-nfpmr" Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.910943 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxgxx\" (UniqueName: \"kubernetes.io/projected/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-kube-api-access-dxgxx\") pod \"redhat-marketplace-nfpmr\" (UID: \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\") " pod="openshift-marketplace/redhat-marketplace-nfpmr" Jan 27 13:46:39 crc kubenswrapper[4914]: I0127 13:46:39.960537 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:39 crc kubenswrapper[4914]: E0127 13:46:39.961021 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:40.460987782 +0000 UTC m=+158.773337867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.014715 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfpmr" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.025331 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nff6l"] Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.027441 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nff6l" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.057659 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nff6l"] Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.061755 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3080a558-3dff-475d-a18d-c9660c4a1b47-catalog-content\") pod \"redhat-marketplace-nff6l\" (UID: \"3080a558-3dff-475d-a18d-c9660c4a1b47\") " pod="openshift-marketplace/redhat-marketplace-nff6l" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.061789 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3080a558-3dff-475d-a18d-c9660c4a1b47-utilities\") pod \"redhat-marketplace-nff6l\" (UID: \"3080a558-3dff-475d-a18d-c9660c4a1b47\") " pod="openshift-marketplace/redhat-marketplace-nff6l" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.061818 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:40 crc kubenswrapper[4914]: E0127 13:46:40.062421 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:40.562398045 +0000 UTC m=+158.874748210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.065925 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6skk\" (UniqueName: \"kubernetes.io/projected/3080a558-3dff-475d-a18d-c9660c4a1b47-kube-api-access-m6skk\") pod \"redhat-marketplace-nff6l\" (UID: \"3080a558-3dff-475d-a18d-c9660c4a1b47\") " pod="openshift-marketplace/redhat-marketplace-nff6l" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.143650 4914 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.166816 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.167170 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3080a558-3dff-475d-a18d-c9660c4a1b47-catalog-content\") pod \"redhat-marketplace-nff6l\" (UID: \"3080a558-3dff-475d-a18d-c9660c4a1b47\") " pod="openshift-marketplace/redhat-marketplace-nff6l" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.167204 4914 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3080a558-3dff-475d-a18d-c9660c4a1b47-utilities\") pod \"redhat-marketplace-nff6l\" (UID: \"3080a558-3dff-475d-a18d-c9660c4a1b47\") " pod="openshift-marketplace/redhat-marketplace-nff6l" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.167271 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6skk\" (UniqueName: \"kubernetes.io/projected/3080a558-3dff-475d-a18d-c9660c4a1b47-kube-api-access-m6skk\") pod \"redhat-marketplace-nff6l\" (UID: \"3080a558-3dff-475d-a18d-c9660c4a1b47\") " pod="openshift-marketplace/redhat-marketplace-nff6l" Jan 27 13:46:40 crc kubenswrapper[4914]: E0127 13:46:40.167468 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:40.667444683 +0000 UTC m=+158.979794768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.167867 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3080a558-3dff-475d-a18d-c9660c4a1b47-catalog-content\") pod \"redhat-marketplace-nff6l\" (UID: \"3080a558-3dff-475d-a18d-c9660c4a1b47\") " pod="openshift-marketplace/redhat-marketplace-nff6l" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.169208 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3080a558-3dff-475d-a18d-c9660c4a1b47-utilities\") pod \"redhat-marketplace-nff6l\" (UID: \"3080a558-3dff-475d-a18d-c9660c4a1b47\") " pod="openshift-marketplace/redhat-marketplace-nff6l" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.191470 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6skk\" (UniqueName: \"kubernetes.io/projected/3080a558-3dff-475d-a18d-c9660c4a1b47-kube-api-access-m6skk\") pod \"redhat-marketplace-nff6l\" (UID: \"3080a558-3dff-475d-a18d-c9660c4a1b47\") " pod="openshift-marketplace/redhat-marketplace-nff6l" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.269415 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:40 crc kubenswrapper[4914]: E0127 13:46:40.269976 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:40.769906264 +0000 UTC m=+159.082256349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.279059 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfpmr"] Jan 27 13:46:40 crc kubenswrapper[4914]: W0127 13:46:40.291487 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1bbb3b4_c5a3_4939_bf5e_9d6c525ac6ab.slice/crio-e9750e9124154fcd0a987dbe655aeec48f6bb474c9dc4797ac61f938943ecb2a WatchSource:0}: Error finding container e9750e9124154fcd0a987dbe655aeec48f6bb474c9dc4797ac61f938943ecb2a: Status 404 returned error can't find the container with id e9750e9124154fcd0a987dbe655aeec48f6bb474c9dc4797ac61f938943ecb2a Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.302576 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.303188 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.305878 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.306168 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.312606 4914 patch_prober.go:28] interesting pod/router-default-5444994796-vm26v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:46:40 crc kubenswrapper[4914]: [-]has-synced failed: reason withheld Jan 27 13:46:40 crc kubenswrapper[4914]: [+]process-running ok Jan 27 13:46:40 crc kubenswrapper[4914]: healthz check failed Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.312703 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm26v" podUID="6295fb40-33f8-4b77-8c3e-d36037efa07e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.312893 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.366893 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nff6l" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.370924 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:40 crc kubenswrapper[4914]: E0127 13:46:40.371050 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:40.871019979 +0000 UTC m=+159.183370064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.371756 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.371987 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7eb27548-790e-4e9a-b663-9cd584e5e476-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7eb27548-790e-4e9a-b663-9cd584e5e476\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.372059 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7eb27548-790e-4e9a-b663-9cd584e5e476-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7eb27548-790e-4e9a-b663-9cd584e5e476\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:46:40 crc kubenswrapper[4914]: E0127 13:46:40.372616 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:40.872581259 +0000 UTC m=+159.184931334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.473301 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:40 crc kubenswrapper[4914]: E0127 13:46:40.473568 4914 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:40.97353149 +0000 UTC m=+159.285881585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.473631 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eb27548-790e-4e9a-b663-9cd584e5e476-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7eb27548-790e-4e9a-b663-9cd584e5e476\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.473805 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7eb27548-790e-4e9a-b663-9cd584e5e476-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7eb27548-790e-4e9a-b663-9cd584e5e476\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.474117 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:40 crc 
kubenswrapper[4914]: I0127 13:46:40.474196 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7eb27548-790e-4e9a-b663-9cd584e5e476-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7eb27548-790e-4e9a-b663-9cd584e5e476\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:46:40 crc kubenswrapper[4914]: E0127 13:46:40.474629 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:46:40.974620049 +0000 UTC m=+159.286970134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rhxcj" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.495581 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.495642 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.498596 4914 patch_prober.go:28] interesting pod/console-f9d7485db-rktkr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.498672 4914 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-f9d7485db-rktkr" podUID="90260720-9ce0-4da9-932b-34f7ce235091" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.514146 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eb27548-790e-4e9a-b663-9cd584e5e476-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7eb27548-790e-4e9a-b663-9cd584e5e476\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.545093 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.576050 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:40 crc kubenswrapper[4914]: E0127 13:46:40.577197 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:46:41.07713189 +0000 UTC m=+159.389482015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.589247 4914 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T13:46:40.143874922Z","Handler":null,"Name":""} Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.600418 4914 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.600449 4914 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.604210 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.605061 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" event={"ID":"1e9e06d9-6682-4f7f-a4a0-36414213490b","Type":"ContainerStarted","Data":"a31ae9a7ec5c7ba49407b37f0fcc2352fb080c1e1a74fbbdd202aa83066b4ffc"} Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.605090 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" 
event={"ID":"1e9e06d9-6682-4f7f-a4a0-36414213490b","Type":"ContainerStarted","Data":"bdf2b425e4f53b9ad7c4d46e1c5437fbe818803a44519fe8f1b018e3d912dcf9"} Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.607976 4914 generic.go:334] "Generic (PLEG): container finished" podID="02984395-bee4-40bd-98ab-2bf03009bb9f" containerID="b3528b0aa71be1bd716f42ca84a51d4e70cf49bbcd0f06da9e797088658d81b3" exitCode=0 Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.608027 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgzjm" event={"ID":"02984395-bee4-40bd-98ab-2bf03009bb9f","Type":"ContainerDied","Data":"b3528b0aa71be1bd716f42ca84a51d4e70cf49bbcd0f06da9e797088658d81b3"} Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.608048 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgzjm" event={"ID":"02984395-bee4-40bd-98ab-2bf03009bb9f","Type":"ContainerStarted","Data":"890770ee8db7e33ba7dfa78e1c2d1de3690d432d8bd59aa6c8150a8c6433f400"} Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.608589 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4jdfm" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.621933 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.623098 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zg4gg"] Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.624884 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.627409 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.628550 4914 generic.go:334] "Generic (PLEG): container finished" podID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" containerID="ac1e0d17b1e3228aa363cbb9dec5bbfa029f5d49fa96a14439884f0313108016" exitCode=0 Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.629270 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4lbv" event={"ID":"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d","Type":"ContainerDied","Data":"ac1e0d17b1e3228aa363cbb9dec5bbfa029f5d49fa96a14439884f0313108016"} Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.629295 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4lbv" event={"ID":"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d","Type":"ContainerStarted","Data":"88c150bb71f700224650635b84862a90af98461a2e569e523d75ace776e575f1"} Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.639692 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gzlfj" podStartSLOduration=12.639672169 podStartE2EDuration="12.639672169s" podCreationTimestamp="2026-01-27 13:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:40.629781158 +0000 UTC m=+158.942131233" watchObservedRunningTime="2026-01-27 13:46:40.639672169 +0000 UTC m=+158.952022244" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.643262 4914 generic.go:334] "Generic (PLEG): container finished" podID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" containerID="ca3ad87478d297fa47635788c287edca8840f9dfc050395c2f79e34fc72cba82" exitCode=0 
Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.643356 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfpmr" event={"ID":"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab","Type":"ContainerDied","Data":"ca3ad87478d297fa47635788c287edca8840f9dfc050395c2f79e34fc72cba82"} Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.643389 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfpmr" event={"ID":"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab","Type":"ContainerStarted","Data":"e9750e9124154fcd0a987dbe655aeec48f6bb474c9dc4797ac61f938943ecb2a"} Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.647211 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zg4gg"] Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.650401 4914 generic.go:334] "Generic (PLEG): container finished" podID="c42d7bb3-fffc-4dd8-bc41-151b5b2df45d" containerID="62db313e514cc3ffb05a45edec75a30cb5e2c42f2aace1f48819576419125f98" exitCode=0 Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.650450 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" event={"ID":"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d","Type":"ContainerDied","Data":"62db313e514cc3ffb05a45edec75a30cb5e2c42f2aace1f48819576419125f98"} Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.677638 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-utilities\") pod \"redhat-operators-zg4gg\" (UID: \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\") " pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.677698 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-catalog-content\") pod \"redhat-operators-zg4gg\" (UID: \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\") " pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.677843 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.677921 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgvgs\" (UniqueName: \"kubernetes.io/projected/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-kube-api-access-mgvgs\") pod \"redhat-operators-zg4gg\" (UID: \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\") " pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.683642 4914 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.683681 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.708115 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.742970 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rhxcj\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") " pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.788360 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.788711 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgvgs\" (UniqueName: \"kubernetes.io/projected/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-kube-api-access-mgvgs\") pod \"redhat-operators-zg4gg\" (UID: 
\"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\") " pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.788777 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-utilities\") pod \"redhat-operators-zg4gg\" (UID: \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\") " pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.788808 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-catalog-content\") pod \"redhat-operators-zg4gg\" (UID: \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\") " pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.791623 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-utilities\") pod \"redhat-operators-zg4gg\" (UID: \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\") " pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.800387 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-catalog-content\") pod \"redhat-operators-zg4gg\" (UID: \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\") " pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.800758 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.849377 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgvgs\" (UniqueName: \"kubernetes.io/projected/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-kube-api-access-mgvgs\") pod \"redhat-operators-zg4gg\" (UID: \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\") " pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.875032 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.875093 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.875032 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.875165 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.881320 4914 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.895266 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nff6l"] Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.896983 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.918501 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.937021 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lg87" Jan 27 13:46:40 crc kubenswrapper[4914]: W0127 13:46:40.939713 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3080a558_3dff_475d_a18d_c9660c4a1b47.slice/crio-ffd17157eb218560fbd57c70880e45037aec0ece0278d56eef01ea741570142c WatchSource:0}: Error finding container ffd17157eb218560fbd57c70880e45037aec0ece0278d56eef01ea741570142c: Status 404 returned error can't find the container with id ffd17157eb218560fbd57c70880e45037aec0ece0278d56eef01ea741570142c Jan 27 13:46:40 crc kubenswrapper[4914]: I0127 13:46:40.958324 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.071384 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cqtkq"] Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.073201 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cqtkq" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.078453 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqtkq"] Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.132375 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.133506 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.143029 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.143274 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.150180 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.158605 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.177586 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-wnsrs" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.226329 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08816fb0-a9de-49ff-a2bf-085c22e28039-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"08816fb0-a9de-49ff-a2bf-085c22e28039\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.226413 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08816fb0-a9de-49ff-a2bf-085c22e28039-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"08816fb0-a9de-49ff-a2bf-085c22e28039\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.226480 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d13ecb-5cee-479b-a638-530382bb5ec6-utilities\") pod \"redhat-operators-cqtkq\" (UID: \"00d13ecb-5cee-479b-a638-530382bb5ec6\") " pod="openshift-marketplace/redhat-operators-cqtkq" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.226514 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d13ecb-5cee-479b-a638-530382bb5ec6-catalog-content\") pod \"redhat-operators-cqtkq\" (UID: \"00d13ecb-5cee-479b-a638-530382bb5ec6\") " pod="openshift-marketplace/redhat-operators-cqtkq" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.226608 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rrq\" (UniqueName: \"kubernetes.io/projected/00d13ecb-5cee-479b-a638-530382bb5ec6-kube-api-access-67rrq\") pod \"redhat-operators-cqtkq\" (UID: \"00d13ecb-5cee-479b-a638-530382bb5ec6\") " pod="openshift-marketplace/redhat-operators-cqtkq" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 
13:46:41.307150 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.317345 4914 patch_prober.go:28] interesting pod/router-default-5444994796-vm26v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:46:41 crc kubenswrapper[4914]: [-]has-synced failed: reason withheld Jan 27 13:46:41 crc kubenswrapper[4914]: [+]process-running ok Jan 27 13:46:41 crc kubenswrapper[4914]: healthz check failed Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.317502 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm26v" podUID="6295fb40-33f8-4b77-8c3e-d36037efa07e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.328422 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08816fb0-a9de-49ff-a2bf-085c22e28039-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"08816fb0-a9de-49ff-a2bf-085c22e28039\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.328485 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08816fb0-a9de-49ff-a2bf-085c22e28039-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"08816fb0-a9de-49ff-a2bf-085c22e28039\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.328539 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d13ecb-5cee-479b-a638-530382bb5ec6-utilities\") pod 
\"redhat-operators-cqtkq\" (UID: \"00d13ecb-5cee-479b-a638-530382bb5ec6\") " pod="openshift-marketplace/redhat-operators-cqtkq" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.328574 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d13ecb-5cee-479b-a638-530382bb5ec6-catalog-content\") pod \"redhat-operators-cqtkq\" (UID: \"00d13ecb-5cee-479b-a638-530382bb5ec6\") " pod="openshift-marketplace/redhat-operators-cqtkq" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.328622 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rrq\" (UniqueName: \"kubernetes.io/projected/00d13ecb-5cee-479b-a638-530382bb5ec6-kube-api-access-67rrq\") pod \"redhat-operators-cqtkq\" (UID: \"00d13ecb-5cee-479b-a638-530382bb5ec6\") " pod="openshift-marketplace/redhat-operators-cqtkq" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.329178 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08816fb0-a9de-49ff-a2bf-085c22e28039-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"08816fb0-a9de-49ff-a2bf-085c22e28039\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.330320 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d13ecb-5cee-479b-a638-530382bb5ec6-utilities\") pod \"redhat-operators-cqtkq\" (UID: \"00d13ecb-5cee-479b-a638-530382bb5ec6\") " pod="openshift-marketplace/redhat-operators-cqtkq" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.330575 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d13ecb-5cee-479b-a638-530382bb5ec6-catalog-content\") pod \"redhat-operators-cqtkq\" (UID: \"00d13ecb-5cee-479b-a638-530382bb5ec6\") 
" pod="openshift-marketplace/redhat-operators-cqtkq" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.373053 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rrq\" (UniqueName: \"kubernetes.io/projected/00d13ecb-5cee-479b-a638-530382bb5ec6-kube-api-access-67rrq\") pod \"redhat-operators-cqtkq\" (UID: \"00d13ecb-5cee-479b-a638-530382bb5ec6\") " pod="openshift-marketplace/redhat-operators-cqtkq" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.375026 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.379740 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08816fb0-a9de-49ff-a2bf-085c22e28039-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"08816fb0-a9de-49ff-a2bf-085c22e28039\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.380142 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rhxcj"] Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.447347 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cqtkq" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.477202 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.503348 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zg4gg"] Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.581206 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.605523 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssxkp" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.678435 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jnmjw" Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.800372 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4gg" event={"ID":"d6cc3d29-abfa-4a3e-8251-5811e2bab91e","Type":"ContainerStarted","Data":"2a15ecc62b88f242b2852343c03490b5e1b22da34f61531b9df27c5ddb3725e1"} Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.806231 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" event={"ID":"df3e61ea-86a0-416e-9e24-d90241f6a543","Type":"ContainerStarted","Data":"f9dc79d613981944c31a9f83d55c964d0c0b549257ce924b5ac9499c7b32d2b1"} Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.807865 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7eb27548-790e-4e9a-b663-9cd584e5e476","Type":"ContainerStarted","Data":"06c1f9a489207b8809da9c1485a07deba0c854d9f0d3f043592280e329ac1093"} Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.810778 4914 generic.go:334] "Generic (PLEG): container finished" 
podID="3080a558-3dff-475d-a18d-c9660c4a1b47" containerID="45e8ca55a3cb3a9f48a61ca8aeb25aac17f22eb7925fa480e110eb5eebdb8c40" exitCode=0 Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.811818 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nff6l" event={"ID":"3080a558-3dff-475d-a18d-c9660c4a1b47","Type":"ContainerDied","Data":"45e8ca55a3cb3a9f48a61ca8aeb25aac17f22eb7925fa480e110eb5eebdb8c40"} Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.811867 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nff6l" event={"ID":"3080a558-3dff-475d-a18d-c9660c4a1b47","Type":"ContainerStarted","Data":"ffd17157eb218560fbd57c70880e45037aec0ece0278d56eef01ea741570142c"} Jan 27 13:46:41 crc kubenswrapper[4914]: I0127 13:46:41.987021 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqtkq"] Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.173455 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 13:46:42 crc kubenswrapper[4914]: W0127 13:46:42.202067 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod08816fb0_a9de_49ff_a2bf_085c22e28039.slice/crio-14c2df3dcfe49e1d514a4a0f3389089d013219c943e8f4c7679ccfaece68ece9 WatchSource:0}: Error finding container 14c2df3dcfe49e1d514a4a0f3389089d013219c943e8f4c7679ccfaece68ece9: Status 404 returned error can't find the container with id 14c2df3dcfe49e1d514a4a0f3389089d013219c943e8f4c7679ccfaece68ece9 Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.331113 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.336650 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.353274 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.369231 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-vm26v" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.470385 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-secret-volume\") pod \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\" (UID: \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\") " Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.470470 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-config-volume\") pod \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\" (UID: \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\") " Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.470545 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddj7j\" (UniqueName: \"kubernetes.io/projected/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-kube-api-access-ddj7j\") pod \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\" (UID: \"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d\") " Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.472574 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-config-volume" (OuterVolumeSpecName: "config-volume") pod "c42d7bb3-fffc-4dd8-bc41-151b5b2df45d" (UID: "c42d7bb3-fffc-4dd8-bc41-151b5b2df45d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.481990 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-kube-api-access-ddj7j" (OuterVolumeSpecName: "kube-api-access-ddj7j") pod "c42d7bb3-fffc-4dd8-bc41-151b5b2df45d" (UID: "c42d7bb3-fffc-4dd8-bc41-151b5b2df45d"). InnerVolumeSpecName "kube-api-access-ddj7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.482619 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c42d7bb3-fffc-4dd8-bc41-151b5b2df45d" (UID: "c42d7bb3-fffc-4dd8-bc41-151b5b2df45d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.571986 4914 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.572022 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddj7j\" (UniqueName: \"kubernetes.io/projected/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-kube-api-access-ddj7j\") on node \"crc\" DevicePath \"\"" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.572038 4914 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.826139 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" 
event={"ID":"df3e61ea-86a0-416e-9e24-d90241f6a543","Type":"ContainerStarted","Data":"eff2c746c1ff87a6b07bbbbebd90376e42cecbe25ac5eb5d5a483cf6d94d0c93"} Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.827091 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.834780 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.834781 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26" event={"ID":"c42d7bb3-fffc-4dd8-bc41-151b5b2df45d","Type":"ContainerDied","Data":"59225a8e287b776521095d0f09271dbde5745465912e884a3d8f01790cbeb1ee"} Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.834882 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59225a8e287b776521095d0f09271dbde5745465912e884a3d8f01790cbeb1ee" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.837437 4914 generic.go:334] "Generic (PLEG): container finished" podID="00d13ecb-5cee-479b-a638-530382bb5ec6" containerID="b5999d7c6d2c7e91d925c45964035e22859cafea3de5b0f2903ada28fcc8cc29" exitCode=0 Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.837501 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtkq" event={"ID":"00d13ecb-5cee-479b-a638-530382bb5ec6","Type":"ContainerDied","Data":"b5999d7c6d2c7e91d925c45964035e22859cafea3de5b0f2903ada28fcc8cc29"} Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.837533 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtkq" 
event={"ID":"00d13ecb-5cee-479b-a638-530382bb5ec6","Type":"ContainerStarted","Data":"a34d4211c0febaabbbd767973154a17632368c7ba07d84e8d31e05655adb9e01"} Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.853646 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" podStartSLOduration=138.853600547 podStartE2EDuration="2m18.853600547s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:42.852225061 +0000 UTC m=+161.164575166" watchObservedRunningTime="2026-01-27 13:46:42.853600547 +0000 UTC m=+161.165950632" Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.854504 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"08816fb0-a9de-49ff-a2bf-085c22e28039","Type":"ContainerStarted","Data":"14c2df3dcfe49e1d514a4a0f3389089d013219c943e8f4c7679ccfaece68ece9"} Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.869770 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7eb27548-790e-4e9a-b663-9cd584e5e476","Type":"ContainerStarted","Data":"6cc0569bdcf970acc457515135d4bb3e4052bb8d40362f2815d29b6774ee7d1f"} Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.879310 4914 generic.go:334] "Generic (PLEG): container finished" podID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" containerID="92a5484454dbd611740ebe8c370fe95abe7dad83b8c5069c4048147c86990843" exitCode=0 Jan 27 13:46:42 crc kubenswrapper[4914]: I0127 13:46:42.880051 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4gg" event={"ID":"d6cc3d29-abfa-4a3e-8251-5811e2bab91e","Type":"ContainerDied","Data":"92a5484454dbd611740ebe8c370fe95abe7dad83b8c5069c4048147c86990843"} Jan 27 13:46:42 crc 
kubenswrapper[4914]: I0127 13:46:42.895095 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.8950773 podStartE2EDuration="2.8950773s" podCreationTimestamp="2026-01-27 13:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:42.884970614 +0000 UTC m=+161.197320709" watchObservedRunningTime="2026-01-27 13:46:42.8950773 +0000 UTC m=+161.207427385" Jan 27 13:46:43 crc kubenswrapper[4914]: I0127 13:46:43.681687 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fxxfj" Jan 27 13:46:43 crc kubenswrapper[4914]: I0127 13:46:43.889596 4914 generic.go:334] "Generic (PLEG): container finished" podID="7eb27548-790e-4e9a-b663-9cd584e5e476" containerID="6cc0569bdcf970acc457515135d4bb3e4052bb8d40362f2815d29b6774ee7d1f" exitCode=0 Jan 27 13:46:43 crc kubenswrapper[4914]: I0127 13:46:43.889720 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7eb27548-790e-4e9a-b663-9cd584e5e476","Type":"ContainerDied","Data":"6cc0569bdcf970acc457515135d4bb3e4052bb8d40362f2815d29b6774ee7d1f"} Jan 27 13:46:43 crc kubenswrapper[4914]: I0127 13:46:43.893472 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"08816fb0-a9de-49ff-a2bf-085c22e28039","Type":"ContainerStarted","Data":"25c03bcdafe73743fc32fab0fc7a91451b7efc449d89cd0137a235f9cf9734fa"} Jan 27 13:46:43 crc kubenswrapper[4914]: I0127 13:46:43.923682 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.923660178 podStartE2EDuration="2.923660178s" podCreationTimestamp="2026-01-27 13:46:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:43.917042464 +0000 UTC m=+162.229392549" watchObservedRunningTime="2026-01-27 13:46:43.923660178 +0000 UTC m=+162.236010263" Jan 27 13:46:44 crc kubenswrapper[4914]: I0127 13:46:44.907610 4914 generic.go:334] "Generic (PLEG): container finished" podID="08816fb0-a9de-49ff-a2bf-085c22e28039" containerID="25c03bcdafe73743fc32fab0fc7a91451b7efc449d89cd0137a235f9cf9734fa" exitCode=0 Jan 27 13:46:44 crc kubenswrapper[4914]: I0127 13:46:44.907682 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"08816fb0-a9de-49ff-a2bf-085c22e28039","Type":"ContainerDied","Data":"25c03bcdafe73743fc32fab0fc7a91451b7efc449d89cd0137a235f9cf9734fa"} Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.082063 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.196942 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7eb27548-790e-4e9a-b663-9cd584e5e476-kubelet-dir\") pod \"7eb27548-790e-4e9a-b663-9cd584e5e476\" (UID: \"7eb27548-790e-4e9a-b663-9cd584e5e476\") " Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.197016 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eb27548-790e-4e9a-b663-9cd584e5e476-kube-api-access\") pod \"7eb27548-790e-4e9a-b663-9cd584e5e476\" (UID: \"7eb27548-790e-4e9a-b663-9cd584e5e476\") " Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.197074 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eb27548-790e-4e9a-b663-9cd584e5e476-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"7eb27548-790e-4e9a-b663-9cd584e5e476" (UID: "7eb27548-790e-4e9a-b663-9cd584e5e476"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.197354 4914 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7eb27548-790e-4e9a-b663-9cd584e5e476-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.204353 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb27548-790e-4e9a-b663-9cd584e5e476-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7eb27548-790e-4e9a-b663-9cd584e5e476" (UID: "7eb27548-790e-4e9a-b663-9cd584e5e476"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.272544 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.298800 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eb27548-790e-4e9a-b663-9cd584e5e476-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.400049 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08816fb0-a9de-49ff-a2bf-085c22e28039-kubelet-dir\") pod \"08816fb0-a9de-49ff-a2bf-085c22e28039\" (UID: \"08816fb0-a9de-49ff-a2bf-085c22e28039\") " Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.400117 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08816fb0-a9de-49ff-a2bf-085c22e28039-kube-api-access\") pod \"08816fb0-a9de-49ff-a2bf-085c22e28039\" (UID: \"08816fb0-a9de-49ff-a2bf-085c22e28039\") " Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.401195 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08816fb0-a9de-49ff-a2bf-085c22e28039-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "08816fb0-a9de-49ff-a2bf-085c22e28039" (UID: "08816fb0-a9de-49ff-a2bf-085c22e28039"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.407595 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08816fb0-a9de-49ff-a2bf-085c22e28039-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "08816fb0-a9de-49ff-a2bf-085c22e28039" (UID: "08816fb0-a9de-49ff-a2bf-085c22e28039"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.505560 4914 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08816fb0-a9de-49ff-a2bf-085c22e28039-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.506200 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08816fb0-a9de-49ff-a2bf-085c22e28039-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.932754 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"08816fb0-a9de-49ff-a2bf-085c22e28039","Type":"ContainerDied","Data":"14c2df3dcfe49e1d514a4a0f3389089d013219c943e8f4c7679ccfaece68ece9"} Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.932792 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14c2df3dcfe49e1d514a4a0f3389089d013219c943e8f4c7679ccfaece68ece9" Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.932866 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.946699 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7eb27548-790e-4e9a-b663-9cd584e5e476","Type":"ContainerDied","Data":"06c1f9a489207b8809da9c1485a07deba0c854d9f0d3f043592280e329ac1093"} Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.946740 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06c1f9a489207b8809da9c1485a07deba0c854d9f0d3f043592280e329ac1093" Jan 27 13:46:46 crc kubenswrapper[4914]: I0127 13:46:46.946773 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:46:47 crc kubenswrapper[4914]: I0127 13:46:47.327851 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:47 crc kubenswrapper[4914]: I0127 13:46:47.343629 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d4d49f-291e-448e-81eb-0895324cd4ae-metrics-certs\") pod \"network-metrics-daemon-22nld\" (UID: \"72d4d49f-291e-448e-81eb-0895324cd4ae\") " pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:47 crc kubenswrapper[4914]: I0127 13:46:47.437668 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-22nld" Jan 27 13:46:48 crc kubenswrapper[4914]: I0127 13:46:48.329090 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-22nld"] Jan 27 13:46:48 crc kubenswrapper[4914]: W0127 13:46:48.392101 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d4d49f_291e_448e_81eb_0895324cd4ae.slice/crio-7b71d3841851522158edd3cae7a4a6a86d911d571c730f5ab2e1f0887dbd7627 WatchSource:0}: Error finding container 7b71d3841851522158edd3cae7a4a6a86d911d571c730f5ab2e1f0887dbd7627: Status 404 returned error can't find the container with id 7b71d3841851522158edd3cae7a4a6a86d911d571c730f5ab2e1f0887dbd7627 Jan 27 13:46:48 crc kubenswrapper[4914]: I0127 13:46:48.971176 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-22nld" 
event={"ID":"72d4d49f-291e-448e-81eb-0895324cd4ae","Type":"ContainerStarted","Data":"7b71d3841851522158edd3cae7a4a6a86d911d571c730f5ab2e1f0887dbd7627"} Jan 27 13:46:49 crc kubenswrapper[4914]: I0127 13:46:49.979006 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-22nld" event={"ID":"72d4d49f-291e-448e-81eb-0895324cd4ae","Type":"ContainerStarted","Data":"7db2f258f3a368b7fa08c60c405cc9ed622cdb6699470ee1c9142dbe913a8a99"} Jan 27 13:46:49 crc kubenswrapper[4914]: I0127 13:46:49.985701 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:46:50 crc kubenswrapper[4914]: I0127 13:46:50.515545 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:50 crc kubenswrapper[4914]: I0127 13:46:50.520155 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:46:50 crc kubenswrapper[4914]: I0127 13:46:50.866271 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 13:46:50 crc kubenswrapper[4914]: I0127 13:46:50.866632 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 13:46:50 crc kubenswrapper[4914]: I0127 13:46:50.867074 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 
10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 13:46:50 crc kubenswrapper[4914]: I0127 13:46:50.867104 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 13:46:50 crc kubenswrapper[4914]: I0127 13:46:50.986690 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-22nld" event={"ID":"72d4d49f-291e-448e-81eb-0895324cd4ae","Type":"ContainerStarted","Data":"fc8dc80f9c8ea8727c35cd190adc7fb2cdfb3338b16c2f091400f87d5bfef583"} Jan 27 13:46:52 crc kubenswrapper[4914]: I0127 13:46:52.019897 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-22nld" podStartSLOduration=148.019880004 podStartE2EDuration="2m28.019880004s" podCreationTimestamp="2026-01-27 13:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:46:52.016322681 +0000 UTC m=+170.328672796" watchObservedRunningTime="2026-01-27 13:46:52.019880004 +0000 UTC m=+170.332230089" Jan 27 13:46:57 crc kubenswrapper[4914]: I0127 13:46:57.402493 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-f8vp8"] Jan 27 13:46:57 crc kubenswrapper[4914]: I0127 13:46:57.403286 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" podUID="5bc9d257-6992-48cf-963b-42c22a5dd170" containerName="controller-manager" containerID="cri-o://5598772a9005461893aa39d16edf9e3bfd6781330abb14f48ae1ae01e8b2151a" gracePeriod=30 Jan 27 13:46:57 crc kubenswrapper[4914]: I0127 13:46:57.411759 4914 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"] Jan 27 13:46:57 crc kubenswrapper[4914]: I0127 13:46:57.411991 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" podUID="745ec1ee-15c4-456b-9e1e-9015e27c4845" containerName="route-controller-manager" containerID="cri-o://96d00e119b54d33d24bb92af55242d719bd335e0a6aa36e50021963f67904f51" gracePeriod=30 Jan 27 13:46:58 crc kubenswrapper[4914]: I0127 13:46:58.037253 4914 generic.go:334] "Generic (PLEG): container finished" podID="745ec1ee-15c4-456b-9e1e-9015e27c4845" containerID="96d00e119b54d33d24bb92af55242d719bd335e0a6aa36e50021963f67904f51" exitCode=0 Jan 27 13:46:58 crc kubenswrapper[4914]: I0127 13:46:58.037343 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" event={"ID":"745ec1ee-15c4-456b-9e1e-9015e27c4845","Type":"ContainerDied","Data":"96d00e119b54d33d24bb92af55242d719bd335e0a6aa36e50021963f67904f51"} Jan 27 13:46:58 crc kubenswrapper[4914]: I0127 13:46:58.040317 4914 generic.go:334] "Generic (PLEG): container finished" podID="5bc9d257-6992-48cf-963b-42c22a5dd170" containerID="5598772a9005461893aa39d16edf9e3bfd6781330abb14f48ae1ae01e8b2151a" exitCode=0 Jan 27 13:46:58 crc kubenswrapper[4914]: I0127 13:46:58.040372 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" event={"ID":"5bc9d257-6992-48cf-963b-42c22a5dd170","Type":"ContainerDied","Data":"5598772a9005461893aa39d16edf9e3bfd6781330abb14f48ae1ae01e8b2151a"} Jan 27 13:47:00 crc kubenswrapper[4914]: I0127 13:47:00.536036 4914 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nv9bs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 27 13:47:00 crc kubenswrapper[4914]: I0127 13:47:00.536394 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" podUID="745ec1ee-15c4-456b-9e1e-9015e27c4845" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 27 13:47:00 crc kubenswrapper[4914]: I0127 13:47:00.805403 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" Jan 27 13:47:00 crc kubenswrapper[4914]: I0127 13:47:00.866082 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 13:47:00 crc kubenswrapper[4914]: I0127 13:47:00.866144 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 13:47:00 crc kubenswrapper[4914]: I0127 13:47:00.866157 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 13:47:00 crc kubenswrapper[4914]: I0127 13:47:00.866202 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 13:47:00 crc kubenswrapper[4914]: I0127 13:47:00.866255 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-mwzf8" Jan 27 13:47:00 crc kubenswrapper[4914]: I0127 13:47:00.866764 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 13:47:00 crc kubenswrapper[4914]: I0127 13:47:00.866811 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 13:47:00 crc kubenswrapper[4914]: I0127 13:47:00.866871 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"be878b3a27d22a0a442960b6d03c38bbfab5d1bb5ceebbc90727f878841cd78d"} pod="openshift-console/downloads-7954f5f757-mwzf8" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 27 13:47:00 crc kubenswrapper[4914]: I0127 13:47:00.866967 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" containerID="cri-o://be878b3a27d22a0a442960b6d03c38bbfab5d1bb5ceebbc90727f878841cd78d" gracePeriod=2 Jan 27 13:47:01 crc kubenswrapper[4914]: I0127 13:47:01.700633 4914 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-f8vp8 container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 13:47:01 crc kubenswrapper[4914]: I0127 13:47:01.700950 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" podUID="5bc9d257-6992-48cf-963b-42c22a5dd170" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 13:47:02 crc kubenswrapper[4914]: I0127 13:47:02.071350 4914 generic.go:334] "Generic (PLEG): container finished" podID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerID="be878b3a27d22a0a442960b6d03c38bbfab5d1bb5ceebbc90727f878841cd78d" exitCode=0 Jan 27 13:47:02 crc kubenswrapper[4914]: I0127 13:47:02.071396 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mwzf8" event={"ID":"7add3664-f0a1-4575-bc02-ff364cf808b7","Type":"ContainerDied","Data":"be878b3a27d22a0a442960b6d03c38bbfab5d1bb5ceebbc90727f878841cd78d"} Jan 27 13:47:07 crc kubenswrapper[4914]: I0127 13:47:07.690904 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:47:07 crc kubenswrapper[4914]: I0127 13:47:07.691423 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 27 13:47:10 crc kubenswrapper[4914]: I0127 13:47:10.537171 4914 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nv9bs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 27 13:47:10 crc kubenswrapper[4914]: I0127 13:47:10.537885 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" podUID="745ec1ee-15c4-456b-9e1e-9015e27c4845" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 27 13:47:10 crc kubenswrapper[4914]: I0127 13:47:10.866638 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 13:47:10 crc kubenswrapper[4914]: I0127 13:47:10.866705 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 13:47:11 crc kubenswrapper[4914]: I0127 13:47:11.541695 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7dd65" Jan 27 13:47:11 crc kubenswrapper[4914]: I0127 13:47:11.712683 4914 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-f8vp8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 13:47:11 crc kubenswrapper[4914]: I0127 13:47:11.712767 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" podUID="5bc9d257-6992-48cf-963b-42c22a5dd170" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 13:47:12 crc kubenswrapper[4914]: I0127 13:47:12.583413 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.527320 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 13:47:18 crc kubenswrapper[4914]: E0127 13:47:18.528563 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42d7bb3-fffc-4dd8-bc41-151b5b2df45d" containerName="collect-profiles" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.528654 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42d7bb3-fffc-4dd8-bc41-151b5b2df45d" containerName="collect-profiles" Jan 27 13:47:18 crc kubenswrapper[4914]: E0127 13:47:18.528790 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb27548-790e-4e9a-b663-9cd584e5e476" containerName="pruner" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.528906 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb27548-790e-4e9a-b663-9cd584e5e476" containerName="pruner" Jan 27 13:47:18 crc kubenswrapper[4914]: E0127 13:47:18.528933 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08816fb0-a9de-49ff-a2bf-085c22e28039" containerName="pruner" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 
13:47:18.528963 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="08816fb0-a9de-49ff-a2bf-085c22e28039" containerName="pruner" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.529323 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="08816fb0-a9de-49ff-a2bf-085c22e28039" containerName="pruner" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.529357 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb27548-790e-4e9a-b663-9cd584e5e476" containerName="pruner" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.529374 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="c42d7bb3-fffc-4dd8-bc41-151b5b2df45d" containerName="collect-profiles" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.531384 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.537619 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.540511 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.542586 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.641953 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.658510 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15ed2d8c-e36f-4926-baa8-7251b30d758a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"15ed2d8c-e36f-4926-baa8-7251b30d758a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.658608 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15ed2d8c-e36f-4926-baa8-7251b30d758a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"15ed2d8c-e36f-4926-baa8-7251b30d758a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.677513 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7"] Jan 27 13:47:18 crc kubenswrapper[4914]: E0127 13:47:18.677802 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc9d257-6992-48cf-963b-42c22a5dd170" containerName="controller-manager" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.677819 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc9d257-6992-48cf-963b-42c22a5dd170" containerName="controller-manager" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.677948 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bc9d257-6992-48cf-963b-42c22a5dd170" containerName="controller-manager" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.678366 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.684778 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7"] Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.758987 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-proxy-ca-bundles\") pod \"5bc9d257-6992-48cf-963b-42c22a5dd170\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.759036 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-config\") pod \"5bc9d257-6992-48cf-963b-42c22a5dd170\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.759093 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-client-ca\") pod \"5bc9d257-6992-48cf-963b-42c22a5dd170\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.759148 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfhnc\" (UniqueName: \"kubernetes.io/projected/5bc9d257-6992-48cf-963b-42c22a5dd170-kube-api-access-kfhnc\") pod \"5bc9d257-6992-48cf-963b-42c22a5dd170\" (UID: \"5bc9d257-6992-48cf-963b-42c22a5dd170\") " Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.759176 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bc9d257-6992-48cf-963b-42c22a5dd170-serving-cert\") pod \"5bc9d257-6992-48cf-963b-42c22a5dd170\" (UID: 
\"5bc9d257-6992-48cf-963b-42c22a5dd170\") " Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.759386 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15ed2d8c-e36f-4926-baa8-7251b30d758a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"15ed2d8c-e36f-4926-baa8-7251b30d758a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.759417 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvbs7\" (UniqueName: \"kubernetes.io/projected/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-kube-api-access-wvbs7\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.759435 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-serving-cert\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.759455 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-client-ca\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.759490 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-config\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.759527 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-proxy-ca-bundles\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.759550 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15ed2d8c-e36f-4926-baa8-7251b30d758a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"15ed2d8c-e36f-4926-baa8-7251b30d758a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.759621 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15ed2d8c-e36f-4926-baa8-7251b30d758a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"15ed2d8c-e36f-4926-baa8-7251b30d758a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.760361 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5bc9d257-6992-48cf-963b-42c22a5dd170" (UID: "5bc9d257-6992-48cf-963b-42c22a5dd170"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.761062 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-config" (OuterVolumeSpecName: "config") pod "5bc9d257-6992-48cf-963b-42c22a5dd170" (UID: "5bc9d257-6992-48cf-963b-42c22a5dd170"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.761290 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-client-ca" (OuterVolumeSpecName: "client-ca") pod "5bc9d257-6992-48cf-963b-42c22a5dd170" (UID: "5bc9d257-6992-48cf-963b-42c22a5dd170"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.775292 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc9d257-6992-48cf-963b-42c22a5dd170-kube-api-access-kfhnc" (OuterVolumeSpecName: "kube-api-access-kfhnc") pod "5bc9d257-6992-48cf-963b-42c22a5dd170" (UID: "5bc9d257-6992-48cf-963b-42c22a5dd170"). InnerVolumeSpecName "kube-api-access-kfhnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.775483 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc9d257-6992-48cf-963b-42c22a5dd170-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5bc9d257-6992-48cf-963b-42c22a5dd170" (UID: "5bc9d257-6992-48cf-963b-42c22a5dd170"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.781863 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15ed2d8c-e36f-4926-baa8-7251b30d758a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"15ed2d8c-e36f-4926-baa8-7251b30d758a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.858221 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.860759 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-proxy-ca-bundles\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.860825 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvbs7\" (UniqueName: \"kubernetes.io/projected/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-kube-api-access-wvbs7\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.860908 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-serving-cert\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.860939 4914 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-client-ca\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7"
Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.860981 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-config\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7"
Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.861024 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-config\") on node \"crc\" DevicePath \"\""
Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.861037 4914 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.861049 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfhnc\" (UniqueName: \"kubernetes.io/projected/5bc9d257-6992-48cf-963b-42c22a5dd170-kube-api-access-kfhnc\") on node \"crc\" DevicePath \"\""
Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.861059 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bc9d257-6992-48cf-963b-42c22a5dd170-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.861071 4914 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bc9d257-6992-48cf-963b-42c22a5dd170-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.862142 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-client-ca\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7"
Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.862331 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-proxy-ca-bundles\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7"
Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.862432 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-config\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7"
Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.866277 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-serving-cert\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7"
Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.880172 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvbs7\" (UniqueName: \"kubernetes.io/projected/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-kube-api-access-wvbs7\") pod \"controller-manager-bf8d7fdcc-f89r7\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7"
Jan 27 13:47:18 crc kubenswrapper[4914]: I0127 13:47:18.996697 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7"
Jan 27 13:47:19 crc kubenswrapper[4914]: I0127 13:47:19.165210 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8" event={"ID":"5bc9d257-6992-48cf-963b-42c22a5dd170","Type":"ContainerDied","Data":"24e222b26766c01813eab0ba08d0391cbf866246b25298acdc54dbe6216506a9"}
Jan 27 13:47:19 crc kubenswrapper[4914]: I0127 13:47:19.165241 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-f8vp8"
Jan 27 13:47:19 crc kubenswrapper[4914]: I0127 13:47:19.165306 4914 scope.go:117] "RemoveContainer" containerID="5598772a9005461893aa39d16edf9e3bfd6781330abb14f48ae1ae01e8b2151a"
Jan 27 13:47:19 crc kubenswrapper[4914]: I0127 13:47:19.192744 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-f8vp8"]
Jan 27 13:47:19 crc kubenswrapper[4914]: I0127 13:47:19.195421 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-f8vp8"]
Jan 27 13:47:20 crc kubenswrapper[4914]: E0127 13:47:20.088271 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 27 13:47:20 crc kubenswrapper[4914]: E0127 13:47:20.088474 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxgxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nfpmr_openshift-marketplace(a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 13:47:20 crc kubenswrapper[4914]: E0127 13:47:20.089646 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nfpmr" podUID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab"
Jan 27 13:47:20 crc kubenswrapper[4914]: I0127 13:47:20.378606 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bc9d257-6992-48cf-963b-42c22a5dd170" path="/var/lib/kubelet/pods/5bc9d257-6992-48cf-963b-42c22a5dd170/volumes"
Jan 27 13:47:20 crc kubenswrapper[4914]: I0127 13:47:20.866691 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 27 13:47:20 crc kubenswrapper[4914]: I0127 13:47:20.866757 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 27 13:47:21 crc kubenswrapper[4914]: I0127 13:47:21.535895 4914 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nv9bs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 13:47:21 crc kubenswrapper[4914]: I0127 13:47:21.536269 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" podUID="745ec1ee-15c4-456b-9e1e-9015e27c4845" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 13:47:23 crc kubenswrapper[4914]: I0127 13:47:23.320732 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 27 13:47:23 crc kubenswrapper[4914]: I0127 13:47:23.321702 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 13:47:23 crc kubenswrapper[4914]: I0127 13:47:23.331170 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 27 13:47:23 crc kubenswrapper[4914]: I0127 13:47:23.414088 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59c2b004-9b1e-40e8-82dc-5b8361f8627e-kube-api-access\") pod \"installer-9-crc\" (UID: \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 13:47:23 crc kubenswrapper[4914]: I0127 13:47:23.414170 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59c2b004-9b1e-40e8-82dc-5b8361f8627e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 13:47:23 crc kubenswrapper[4914]: I0127 13:47:23.414210 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/59c2b004-9b1e-40e8-82dc-5b8361f8627e-var-lock\") pod \"installer-9-crc\" (UID: \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 13:47:23 crc kubenswrapper[4914]: I0127 13:47:23.515548 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59c2b004-9b1e-40e8-82dc-5b8361f8627e-kube-api-access\") pod \"installer-9-crc\" (UID: \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 13:47:23 crc kubenswrapper[4914]: I0127 13:47:23.515610 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59c2b004-9b1e-40e8-82dc-5b8361f8627e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 13:47:23 crc kubenswrapper[4914]: I0127 13:47:23.515634 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/59c2b004-9b1e-40e8-82dc-5b8361f8627e-var-lock\") pod \"installer-9-crc\" (UID: \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 13:47:23 crc kubenswrapper[4914]: I0127 13:47:23.515722 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/59c2b004-9b1e-40e8-82dc-5b8361f8627e-var-lock\") pod \"installer-9-crc\" (UID: \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 13:47:23 crc kubenswrapper[4914]: I0127 13:47:23.515907 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59c2b004-9b1e-40e8-82dc-5b8361f8627e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 13:47:23 crc kubenswrapper[4914]: I0127 13:47:23.538056 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59c2b004-9b1e-40e8-82dc-5b8361f8627e-kube-api-access\") pod \"installer-9-crc\" (UID: \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 13:47:23 crc kubenswrapper[4914]: I0127 13:47:23.709788 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 13:47:25 crc kubenswrapper[4914]: E0127 13:47:25.442784 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nfpmr" podUID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab"
Jan 27 13:47:27 crc kubenswrapper[4914]: E0127 13:47:27.815624 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 27 13:47:27 crc kubenswrapper[4914]: E0127 13:47:27.816877 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g25c2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lgzjm_openshift-marketplace(02984395-bee4-40bd-98ab-2bf03009bb9f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 13:47:27 crc kubenswrapper[4914]: E0127 13:47:27.818121 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-lgzjm" podUID="02984395-bee4-40bd-98ab-2bf03009bb9f"
Jan 27 13:47:30 crc kubenswrapper[4914]: I0127 13:47:30.867434 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 27 13:47:30 crc kubenswrapper[4914]: I0127 13:47:30.867503 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 27 13:47:31 crc kubenswrapper[4914]: I0127 13:47:31.549686 4914 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nv9bs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 13:47:31 crc kubenswrapper[4914]: I0127 13:47:31.549747 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" podUID="745ec1ee-15c4-456b-9e1e-9015e27c4845" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 13:47:36 crc kubenswrapper[4914]: E0127 13:47:36.377392 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lgzjm" podUID="02984395-bee4-40bd-98ab-2bf03009bb9f"
Jan 27 13:47:36 crc kubenswrapper[4914]: E0127 13:47:36.440939 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 27 13:47:36 crc kubenswrapper[4914]: E0127 13:47:36.441113 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6skk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nff6l_openshift-marketplace(3080a558-3dff-475d-a18d-c9660c4a1b47): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 13:47:36 crc kubenswrapper[4914]: E0127 13:47:36.442508 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nff6l" podUID="3080a558-3dff-475d-a18d-c9660c4a1b47"
Jan 27 13:47:37 crc kubenswrapper[4914]: I0127 13:47:37.691140 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 13:47:37 crc kubenswrapper[4914]: I0127 13:47:37.691592 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 13:47:37 crc kubenswrapper[4914]: I0127 13:47:37.691653 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz"
Jan 27 13:47:37 crc kubenswrapper[4914]: I0127 13:47:37.692504 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf"} pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 13:47:37 crc kubenswrapper[4914]: I0127 13:47:37.692668 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" containerID="cri-o://8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf" gracePeriod=600
Jan 27 13:47:39 crc kubenswrapper[4914]: E0127 13:47:39.994311 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 27 13:47:39 crc kubenswrapper[4914]: E0127 13:47:39.994774 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67rrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cqtkq_openshift-marketplace(00d13ecb-5cee-479b-a638-530382bb5ec6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 13:47:39 crc kubenswrapper[4914]: E0127 13:47:39.996030 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-cqtkq" podUID="00d13ecb-5cee-479b-a638-530382bb5ec6"
Jan 27 13:47:40 crc kubenswrapper[4914]: I0127 13:47:40.866104 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 27 13:47:40 crc kubenswrapper[4914]: I0127 13:47:40.866171 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 27 13:47:41 crc kubenswrapper[4914]: I0127 13:47:41.288004 4914 generic.go:334] "Generic (PLEG): container finished" podID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerID="8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf" exitCode=0
Jan 27 13:47:41 crc kubenswrapper[4914]: I0127 13:47:41.288058 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerDied","Data":"8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf"}
Jan 27 13:47:41 crc kubenswrapper[4914]: I0127 13:47:41.537689 4914 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nv9bs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 13:47:41 crc kubenswrapper[4914]: I0127 13:47:41.537762 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" podUID="745ec1ee-15c4-456b-9e1e-9015e27c4845" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 13:47:50 crc kubenswrapper[4914]: I0127 13:47:50.866207 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 27 13:47:50 crc kubenswrapper[4914]: I0127 13:47:50.866756 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 27 13:47:51 crc kubenswrapper[4914]: I0127 13:47:51.536626 4914 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nv9bs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 13:47:51 crc kubenswrapper[4914]: I0127 13:47:51.536689 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" podUID="745ec1ee-15c4-456b-9e1e-9015e27c4845" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 13:47:57 crc kubenswrapper[4914]: E0127 13:47:57.038191 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 27 13:47:57 crc kubenswrapper[4914]: E0127 13:47:57.038191 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 27 13:47:57 crc kubenswrapper[4914]: E0127 13:47:57.038585 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mgvgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zg4gg_openshift-marketplace(d6cc3d29-abfa-4a3e-8251-5811e2bab91e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 13:47:57 crc kubenswrapper[4914]: E0127 13:47:57.038745 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxd8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-r4lbv_openshift-marketplace(f1b4a22a-26ec-4e2f-9a83-d0532ff4905d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 13:47:57 crc kubenswrapper[4914]: E0127 13:47:57.039797 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zg4gg" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e"
Jan 27 13:47:57 crc kubenswrapper[4914]: E0127 13:47:57.039824 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-r4lbv" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d"
Jan 27 13:47:57 crc kubenswrapper[4914]: E0127 13:47:57.051604 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 27 13:47:57 crc kubenswrapper[4914]: E0127 13:47:57.051746 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qbl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8q7ck_openshift-marketplace(1f2a8a9b-5334-4de2-9198-7677a52f8002): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 13:47:57 crc kubenswrapper[4914]: E0127 13:47:57.053045 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8q7ck" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002"
Jan 27 13:47:57 crc kubenswrapper[4914]: E0127 13:47:57.465442 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zg4gg" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e"
Jan 27 13:47:57 crc kubenswrapper[4914]: E0127 13:47:57.465455 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r4lbv" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d"
Jan 27 13:47:57 crc kubenswrapper[4914]: E0127 13:47:57.465557 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8q7ck" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002"
Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.509279 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"
Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.535184 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd"]
Jan 27 13:47:57 crc kubenswrapper[4914]: E0127 13:47:57.535459 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="745ec1ee-15c4-456b-9e1e-9015e27c4845" containerName="route-controller-manager"
Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.535474 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="745ec1ee-15c4-456b-9e1e-9015e27c4845" containerName="route-controller-manager"
Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.535601 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="745ec1ee-15c4-456b-9e1e-9015e27c4845" containerName="route-controller-manager"
Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.536157 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.542613 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd"] Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.608752 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745ec1ee-15c4-456b-9e1e-9015e27c4845-config\") pod \"745ec1ee-15c4-456b-9e1e-9015e27c4845\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.608804 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/745ec1ee-15c4-456b-9e1e-9015e27c4845-serving-cert\") pod \"745ec1ee-15c4-456b-9e1e-9015e27c4845\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.608874 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/745ec1ee-15c4-456b-9e1e-9015e27c4845-client-ca\") pod \"745ec1ee-15c4-456b-9e1e-9015e27c4845\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.609005 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6ff4\" (UniqueName: \"kubernetes.io/projected/745ec1ee-15c4-456b-9e1e-9015e27c4845-kube-api-access-p6ff4\") pod \"745ec1ee-15c4-456b-9e1e-9015e27c4845\" (UID: \"745ec1ee-15c4-456b-9e1e-9015e27c4845\") " Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.609192 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40c6c52c-17b5-4a94-b18f-6ce669d8409d-client-ca\") pod 
\"route-controller-manager-57496df798-4xbvd\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.609258 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ljt6\" (UniqueName: \"kubernetes.io/projected/40c6c52c-17b5-4a94-b18f-6ce669d8409d-kube-api-access-9ljt6\") pod \"route-controller-manager-57496df798-4xbvd\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.609313 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c6c52c-17b5-4a94-b18f-6ce669d8409d-config\") pod \"route-controller-manager-57496df798-4xbvd\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.609359 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40c6c52c-17b5-4a94-b18f-6ce669d8409d-serving-cert\") pod \"route-controller-manager-57496df798-4xbvd\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.609595 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/745ec1ee-15c4-456b-9e1e-9015e27c4845-config" (OuterVolumeSpecName: "config") pod "745ec1ee-15c4-456b-9e1e-9015e27c4845" (UID: "745ec1ee-15c4-456b-9e1e-9015e27c4845"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.610004 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/745ec1ee-15c4-456b-9e1e-9015e27c4845-client-ca" (OuterVolumeSpecName: "client-ca") pod "745ec1ee-15c4-456b-9e1e-9015e27c4845" (UID: "745ec1ee-15c4-456b-9e1e-9015e27c4845"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.614228 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/745ec1ee-15c4-456b-9e1e-9015e27c4845-kube-api-access-p6ff4" (OuterVolumeSpecName: "kube-api-access-p6ff4") pod "745ec1ee-15c4-456b-9e1e-9015e27c4845" (UID: "745ec1ee-15c4-456b-9e1e-9015e27c4845"). InnerVolumeSpecName "kube-api-access-p6ff4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.614401 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/745ec1ee-15c4-456b-9e1e-9015e27c4845-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "745ec1ee-15c4-456b-9e1e-9015e27c4845" (UID: "745ec1ee-15c4-456b-9e1e-9015e27c4845"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.710802 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40c6c52c-17b5-4a94-b18f-6ce669d8409d-client-ca\") pod \"route-controller-manager-57496df798-4xbvd\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.710916 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ljt6\" (UniqueName: \"kubernetes.io/projected/40c6c52c-17b5-4a94-b18f-6ce669d8409d-kube-api-access-9ljt6\") pod \"route-controller-manager-57496df798-4xbvd\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.710957 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c6c52c-17b5-4a94-b18f-6ce669d8409d-config\") pod \"route-controller-manager-57496df798-4xbvd\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.711005 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40c6c52c-17b5-4a94-b18f-6ce669d8409d-serving-cert\") pod \"route-controller-manager-57496df798-4xbvd\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.711063 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6ff4\" (UniqueName: 
\"kubernetes.io/projected/745ec1ee-15c4-456b-9e1e-9015e27c4845-kube-api-access-p6ff4\") on node \"crc\" DevicePath \"\"" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.711077 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745ec1ee-15c4-456b-9e1e-9015e27c4845-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.711087 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/745ec1ee-15c4-456b-9e1e-9015e27c4845-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.711097 4914 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/745ec1ee-15c4-456b-9e1e-9015e27c4845-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.712020 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40c6c52c-17b5-4a94-b18f-6ce669d8409d-client-ca\") pod \"route-controller-manager-57496df798-4xbvd\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.712401 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c6c52c-17b5-4a94-b18f-6ce669d8409d-config\") pod \"route-controller-manager-57496df798-4xbvd\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.714933 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40c6c52c-17b5-4a94-b18f-6ce669d8409d-serving-cert\") pod 
\"route-controller-manager-57496df798-4xbvd\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.726408 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ljt6\" (UniqueName: \"kubernetes.io/projected/40c6c52c-17b5-4a94-b18f-6ce669d8409d-kube-api-access-9ljt6\") pod \"route-controller-manager-57496df798-4xbvd\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:57 crc kubenswrapper[4914]: I0127 13:47:57.856798 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:58 crc kubenswrapper[4914]: I0127 13:47:58.368903 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" event={"ID":"745ec1ee-15c4-456b-9e1e-9015e27c4845","Type":"ContainerDied","Data":"8fc5cdd90f9ef1120b1353374f41ee906fdc3382292332fc368cb6cb070030fa"} Jan 27 13:47:58 crc kubenswrapper[4914]: I0127 13:47:58.368978 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs" Jan 27 13:47:58 crc kubenswrapper[4914]: I0127 13:47:58.387319 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"] Jan 27 13:47:58 crc kubenswrapper[4914]: I0127 13:47:58.390515 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nv9bs"] Jan 27 13:47:58 crc kubenswrapper[4914]: E0127 13:47:58.454219 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 13:47:58 crc kubenswrapper[4914]: E0127 13:47:58.454385 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6frj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-pkm2z_openshift-marketplace(dc6a9b51-d0a6-4370-94bd-342dcfa54a99): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 13:47:58 crc kubenswrapper[4914]: E0127 13:47:58.455568 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-pkm2z" podUID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" Jan 27 13:47:58 crc 
kubenswrapper[4914]: I0127 13:47:58.491545 4914 scope.go:117] "RemoveContainer" containerID="96d00e119b54d33d24bb92af55242d719bd335e0a6aa36e50021963f67904f51" Jan 27 13:47:58 crc kubenswrapper[4914]: I0127 13:47:58.721943 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 13:47:58 crc kubenswrapper[4914]: I0127 13:47:58.758866 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7"] Jan 27 13:47:58 crc kubenswrapper[4914]: W0127 13:47:58.817542 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51d3fdf2_9bd8_4eac_86ef_51d793f17e03.slice/crio-1703ff6367a3c31122ac72ba11cc8ff9210e6b5fbf12dc977075fcc00866425b WatchSource:0}: Error finding container 1703ff6367a3c31122ac72ba11cc8ff9210e6b5fbf12dc977075fcc00866425b: Status 404 returned error can't find the container with id 1703ff6367a3c31122ac72ba11cc8ff9210e6b5fbf12dc977075fcc00866425b Jan 27 13:47:58 crc kubenswrapper[4914]: I0127 13:47:58.914526 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd"] Jan 27 13:47:58 crc kubenswrapper[4914]: W0127 13:47:58.926416 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40c6c52c_17b5_4a94_b18f_6ce669d8409d.slice/crio-cb3fe6001f89d5d2cb54e31cc99dc6efb4ffbdd78052a4babb245fed2a9908d9 WatchSource:0}: Error finding container cb3fe6001f89d5d2cb54e31cc99dc6efb4ffbdd78052a4babb245fed2a9908d9: Status 404 returned error can't find the container with id cb3fe6001f89d5d2cb54e31cc99dc6efb4ffbdd78052a4babb245fed2a9908d9 Jan 27 13:47:58 crc kubenswrapper[4914]: I0127 13:47:58.999873 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 13:47:59 crc 
kubenswrapper[4914]: W0127 13:47:59.011909 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod15ed2d8c_e36f_4926_baa8_7251b30d758a.slice/crio-1d82c8413a5447e46c161bff401350f92ce013c9a33078346fe32cc116f3fcff WatchSource:0}: Error finding container 1d82c8413a5447e46c161bff401350f92ce013c9a33078346fe32cc116f3fcff: Status 404 returned error can't find the container with id 1d82c8413a5447e46c161bff401350f92ce013c9a33078346fe32cc116f3fcff Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.389488 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nff6l" event={"ID":"3080a558-3dff-475d-a18d-c9660c4a1b47","Type":"ContainerStarted","Data":"ae9aae64c3c5949b3e316a0919d825e3da056ade6b5941382be6cc7d23877cb3"} Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.392185 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" event={"ID":"51d3fdf2-9bd8-4eac-86ef-51d793f17e03","Type":"ContainerStarted","Data":"bef27b7e78605bfcf9b7199a842427dc3bfb1f5e9ae36995d34ee28992365e94"} Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.392220 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" event={"ID":"51d3fdf2-9bd8-4eac-86ef-51d793f17e03","Type":"ContainerStarted","Data":"1703ff6367a3c31122ac72ba11cc8ff9210e6b5fbf12dc977075fcc00866425b"} Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.393048 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.396034 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" 
event={"ID":"40c6c52c-17b5-4a94-b18f-6ce669d8409d","Type":"ContainerStarted","Data":"212ba8facc060da58b015b9d43487970530f24e472c0b124f2b408031716990f"} Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.396068 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" event={"ID":"40c6c52c-17b5-4a94-b18f-6ce669d8409d","Type":"ContainerStarted","Data":"cb3fe6001f89d5d2cb54e31cc99dc6efb4ffbdd78052a4babb245fed2a9908d9"} Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.396710 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.398141 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.403116 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mwzf8" event={"ID":"7add3664-f0a1-4575-bc02-ff364cf808b7","Type":"ContainerStarted","Data":"1904db7e5dd34e06c69e688de811ffc568423546a35bc6ca90b9e89e5495e4ad"} Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.403607 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mwzf8" Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.403652 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.403694 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.415016 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"15ed2d8c-e36f-4926-baa8-7251b30d758a","Type":"ContainerStarted","Data":"1d82c8413a5447e46c161bff401350f92ce013c9a33078346fe32cc116f3fcff"} Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.421937 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"95915b10dd9749a36c926b1d56b1160495b6cbef34f668e33fde194019445d27"} Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.436539 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtkq" event={"ID":"00d13ecb-5cee-479b-a638-530382bb5ec6","Type":"ContainerStarted","Data":"ac6fa26317b525bdb0a1e807a63e5d7eb877230e8ef81323aa1696d8521c574e"} Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.444110 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgzjm" event={"ID":"02984395-bee4-40bd-98ab-2bf03009bb9f","Type":"ContainerStarted","Data":"b7fe8e702dbb5ec62c312b5eb8254b98bc91652774696ab97d0573b0392d1faf"} Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.448992 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfpmr" event={"ID":"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab","Type":"ContainerStarted","Data":"8d0ad1a1be03af1c2d14eb84bfd0f7e7de3b496018888e7702b9a1b583441f3e"} Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.462252 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"59c2b004-9b1e-40e8-82dc-5b8361f8627e","Type":"ContainerStarted","Data":"13b9a3dd8837d07a3a149b9b782edca6306d2752ee4e1c8196a3794907816574"} Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.462286 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"59c2b004-9b1e-40e8-82dc-5b8361f8627e","Type":"ContainerStarted","Data":"1d012b75e57a4055fa16e177abb53aa0fae9dd6e5748642d418a7f07abecbe2b"} Jan 27 13:47:59 crc kubenswrapper[4914]: E0127 13:47:59.463474 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pkm2z" podUID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.487701 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" podStartSLOduration=42.487678925 podStartE2EDuration="42.487678925s" podCreationTimestamp="2026-01-27 13:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:47:59.484218672 +0000 UTC m=+237.796568757" watchObservedRunningTime="2026-01-27 13:47:59.487678925 +0000 UTC m=+237.800029010" Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.512968 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" podStartSLOduration=42.51295419 podStartE2EDuration="42.51295419s" podCreationTimestamp="2026-01-27 13:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:47:59.512118128 +0000 UTC m=+237.824468213" 
watchObservedRunningTime="2026-01-27 13:47:59.51295419 +0000 UTC m=+237.825304275" Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.553326 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=36.553290914 podStartE2EDuration="36.553290914s" podCreationTimestamp="2026-01-27 13:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:47:59.551160437 +0000 UTC m=+237.863510522" watchObservedRunningTime="2026-01-27 13:47:59.553290914 +0000 UTC m=+237.865641009" Jan 27 13:47:59 crc kubenswrapper[4914]: I0127 13:47:59.760529 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.307399 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="745ec1ee-15c4-456b-9e1e-9015e27c4845" path="/var/lib/kubelet/pods/745ec1ee-15c4-456b-9e1e-9015e27c4845/volumes" Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.468510 4914 generic.go:334] "Generic (PLEG): container finished" podID="00d13ecb-5cee-479b-a638-530382bb5ec6" containerID="ac6fa26317b525bdb0a1e807a63e5d7eb877230e8ef81323aa1696d8521c574e" exitCode=0 Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.468563 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtkq" event={"ID":"00d13ecb-5cee-479b-a638-530382bb5ec6","Type":"ContainerDied","Data":"ac6fa26317b525bdb0a1e807a63e5d7eb877230e8ef81323aa1696d8521c574e"} Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.472382 4914 generic.go:334] "Generic (PLEG): container finished" podID="02984395-bee4-40bd-98ab-2bf03009bb9f" containerID="b7fe8e702dbb5ec62c312b5eb8254b98bc91652774696ab97d0573b0392d1faf" exitCode=0 Jan 27 13:48:00 crc kubenswrapper[4914]: 
I0127 13:48:00.472461 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgzjm" event={"ID":"02984395-bee4-40bd-98ab-2bf03009bb9f","Type":"ContainerDied","Data":"b7fe8e702dbb5ec62c312b5eb8254b98bc91652774696ab97d0573b0392d1faf"} Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.476197 4914 generic.go:334] "Generic (PLEG): container finished" podID="3080a558-3dff-475d-a18d-c9660c4a1b47" containerID="ae9aae64c3c5949b3e316a0919d825e3da056ade6b5941382be6cc7d23877cb3" exitCode=0 Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.476315 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nff6l" event={"ID":"3080a558-3dff-475d-a18d-c9660c4a1b47","Type":"ContainerDied","Data":"ae9aae64c3c5949b3e316a0919d825e3da056ade6b5941382be6cc7d23877cb3"} Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.479877 4914 generic.go:334] "Generic (PLEG): container finished" podID="15ed2d8c-e36f-4926-baa8-7251b30d758a" containerID="e96f12c0ce6ad7fc80615a8ffe3d113f05807b92620b0c6a4a528ce440de2c0e" exitCode=0 Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.480000 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"15ed2d8c-e36f-4926-baa8-7251b30d758a","Type":"ContainerDied","Data":"e96f12c0ce6ad7fc80615a8ffe3d113f05807b92620b0c6a4a528ce440de2c0e"} Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.491208 4914 generic.go:334] "Generic (PLEG): container finished" podID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" containerID="8d0ad1a1be03af1c2d14eb84bfd0f7e7de3b496018888e7702b9a1b583441f3e" exitCode=0 Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.491725 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfpmr" 
event={"ID":"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab","Type":"ContainerDied","Data":"8d0ad1a1be03af1c2d14eb84bfd0f7e7de3b496018888e7702b9a1b583441f3e"}
Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.491764 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfpmr" event={"ID":"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab","Type":"ContainerStarted","Data":"7a591c9c0f61df1372063c828e5123d253b4baa01e78bd844bedd97e5156edf4"}
Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.492765 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.492800 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.557228 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nfpmr" podStartSLOduration=2.230245785 podStartE2EDuration="1m21.55721295s" podCreationTimestamp="2026-01-27 13:46:39 +0000 UTC" firstStartedPulling="2026-01-27 13:46:40.659082771 +0000 UTC m=+158.971432856" lastFinishedPulling="2026-01-27 13:47:59.986049936 +0000 UTC m=+238.298400021" observedRunningTime="2026-01-27 13:48:00.555060652 +0000 UTC m=+238.867410737" watchObservedRunningTime="2026-01-27 13:48:00.55721295 +0000 UTC m=+238.869563035"
Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.866253 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.866600 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.866329 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 27 13:48:00 crc kubenswrapper[4914]: I0127 13:48:00.866679 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 27 13:48:01 crc kubenswrapper[4914]: I0127 13:48:01.498972 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtkq" event={"ID":"00d13ecb-5cee-479b-a638-530382bb5ec6","Type":"ContainerStarted","Data":"7417b9c6418661ec2a367ad8b5882abe13d5c3a8749f8dd720fbcf1be4308012"}
Jan 27 13:48:01 crc kubenswrapper[4914]: I0127 13:48:01.501177 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgzjm" event={"ID":"02984395-bee4-40bd-98ab-2bf03009bb9f","Type":"ContainerStarted","Data":"0ba317a053426c16839cb80378793b21d7e2bf157b3a26eefa3216d7960fdbd5"}
Jan 27 13:48:01 crc kubenswrapper[4914]: I0127 13:48:01.503689 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nff6l" event={"ID":"3080a558-3dff-475d-a18d-c9660c4a1b47","Type":"ContainerStarted","Data":"993d8c701ab6304cc0402a2afba56c20d310902f14aedfed44d04ebe3f59677d"}
Jan 27 13:48:01 crc kubenswrapper[4914]: I0127 13:48:01.504412 4914 patch_prober.go:28] interesting pod/downloads-7954f5f757-mwzf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 27 13:48:01 crc kubenswrapper[4914]: I0127 13:48:01.504472 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mwzf8" podUID="7add3664-f0a1-4575-bc02-ff364cf808b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 27 13:48:01 crc kubenswrapper[4914]: I0127 13:48:01.894216 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 13:48:01 crc kubenswrapper[4914]: I0127 13:48:01.967026 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15ed2d8c-e36f-4926-baa8-7251b30d758a-kube-api-access\") pod \"15ed2d8c-e36f-4926-baa8-7251b30d758a\" (UID: \"15ed2d8c-e36f-4926-baa8-7251b30d758a\") "
Jan 27 13:48:01 crc kubenswrapper[4914]: I0127 13:48:01.967219 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15ed2d8c-e36f-4926-baa8-7251b30d758a-kubelet-dir\") pod \"15ed2d8c-e36f-4926-baa8-7251b30d758a\" (UID: \"15ed2d8c-e36f-4926-baa8-7251b30d758a\") "
Jan 27 13:48:01 crc kubenswrapper[4914]: I0127 13:48:01.967422 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15ed2d8c-e36f-4926-baa8-7251b30d758a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "15ed2d8c-e36f-4926-baa8-7251b30d758a" (UID: "15ed2d8c-e36f-4926-baa8-7251b30d758a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 13:48:01 crc kubenswrapper[4914]: I0127 13:48:01.967653 4914 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15ed2d8c-e36f-4926-baa8-7251b30d758a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 13:48:01 crc kubenswrapper[4914]: I0127 13:48:01.975942 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ed2d8c-e36f-4926-baa8-7251b30d758a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "15ed2d8c-e36f-4926-baa8-7251b30d758a" (UID: "15ed2d8c-e36f-4926-baa8-7251b30d758a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:48:02 crc kubenswrapper[4914]: I0127 13:48:02.068839 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15ed2d8c-e36f-4926-baa8-7251b30d758a-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 13:48:02 crc kubenswrapper[4914]: I0127 13:48:02.512956 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"15ed2d8c-e36f-4926-baa8-7251b30d758a","Type":"ContainerDied","Data":"1d82c8413a5447e46c161bff401350f92ce013c9a33078346fe32cc116f3fcff"}
Jan 27 13:48:02 crc kubenswrapper[4914]: I0127 13:48:02.513195 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d82c8413a5447e46c161bff401350f92ce013c9a33078346fe32cc116f3fcff"
Jan 27 13:48:02 crc kubenswrapper[4914]: I0127 13:48:02.513294 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 13:48:02 crc kubenswrapper[4914]: I0127 13:48:02.534894 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lgzjm" podStartSLOduration=5.166674388 podStartE2EDuration="1m25.534873714s" podCreationTimestamp="2026-01-27 13:46:37 +0000 UTC" firstStartedPulling="2026-01-27 13:46:40.610569032 +0000 UTC m=+158.922919117" lastFinishedPulling="2026-01-27 13:48:00.978768358 +0000 UTC m=+239.291118443" observedRunningTime="2026-01-27 13:48:02.532041087 +0000 UTC m=+240.844391172" watchObservedRunningTime="2026-01-27 13:48:02.534873714 +0000 UTC m=+240.847223799"
Jan 27 13:48:02 crc kubenswrapper[4914]: I0127 13:48:02.555408 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nff6l" podStartSLOduration=3.342183478 podStartE2EDuration="1m22.55538906s" podCreationTimestamp="2026-01-27 13:46:40 +0000 UTC" firstStartedPulling="2026-01-27 13:46:41.813714971 +0000 UTC m=+160.126065056" lastFinishedPulling="2026-01-27 13:48:01.026920553 +0000 UTC m=+239.339270638" observedRunningTime="2026-01-27 13:48:02.554500646 +0000 UTC m=+240.866850801" watchObservedRunningTime="2026-01-27 13:48:02.55538906 +0000 UTC m=+240.867739145"
Jan 27 13:48:02 crc kubenswrapper[4914]: I0127 13:48:02.580225 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cqtkq" podStartSLOduration=3.33654927 podStartE2EDuration="1m21.580194613s" podCreationTimestamp="2026-01-27 13:46:41 +0000 UTC" firstStartedPulling="2026-01-27 13:46:42.84004454 +0000 UTC m=+161.152394625" lastFinishedPulling="2026-01-27 13:48:01.083689873 +0000 UTC m=+239.396039968" observedRunningTime="2026-01-27 13:48:02.57865687 +0000 UTC m=+240.891006955" watchObservedRunningTime="2026-01-27 13:48:02.580194613 +0000 UTC m=+240.892544698"
Jan 27 13:48:08 crc kubenswrapper[4914]: I0127 13:48:08.846076 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lgzjm"
Jan 27 13:48:08 crc kubenswrapper[4914]: I0127 13:48:08.846635 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lgzjm"
Jan 27 13:48:09 crc kubenswrapper[4914]: I0127 13:48:09.535015 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lgzjm"
Jan 27 13:48:09 crc kubenswrapper[4914]: I0127 13:48:09.601702 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lgzjm"
Jan 27 13:48:10 crc kubenswrapper[4914]: I0127 13:48:10.024041 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nfpmr"
Jan 27 13:48:10 crc kubenswrapper[4914]: I0127 13:48:10.025375 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nfpmr"
Jan 27 13:48:10 crc kubenswrapper[4914]: I0127 13:48:10.068978 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nfpmr"
Jan 27 13:48:10 crc kubenswrapper[4914]: I0127 13:48:10.368107 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nff6l"
Jan 27 13:48:10 crc kubenswrapper[4914]: I0127 13:48:10.368455 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nff6l"
Jan 27 13:48:10 crc kubenswrapper[4914]: I0127 13:48:10.405019 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nff6l"
Jan 27 13:48:10 crc kubenswrapper[4914]: I0127 13:48:10.601549 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nff6l"
Jan 27 13:48:10 crc kubenswrapper[4914]: I0127 13:48:10.605547 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nfpmr"
Jan 27 13:48:10 crc kubenswrapper[4914]: I0127 13:48:10.885647 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mwzf8"
Jan 27 13:48:11 crc kubenswrapper[4914]: I0127 13:48:11.449475 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cqtkq"
Jan 27 13:48:11 crc kubenswrapper[4914]: I0127 13:48:11.449520 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cqtkq"
Jan 27 13:48:11 crc kubenswrapper[4914]: I0127 13:48:11.488656 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cqtkq"
Jan 27 13:48:11 crc kubenswrapper[4914]: I0127 13:48:11.623279 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cqtkq"
Jan 27 13:48:12 crc kubenswrapper[4914]: I0127 13:48:12.397240 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nff6l"]
Jan 27 13:48:12 crc kubenswrapper[4914]: I0127 13:48:12.575413 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nff6l" podUID="3080a558-3dff-475d-a18d-c9660c4a1b47" containerName="registry-server" containerID="cri-o://993d8c701ab6304cc0402a2afba56c20d310902f14aedfed44d04ebe3f59677d" gracePeriod=2
Jan 27 13:48:14 crc kubenswrapper[4914]: I0127 13:48:14.586396 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nff6l" event={"ID":"3080a558-3dff-475d-a18d-c9660c4a1b47","Type":"ContainerDied","Data":"993d8c701ab6304cc0402a2afba56c20d310902f14aedfed44d04ebe3f59677d"}
Jan 27 13:48:14 crc kubenswrapper[4914]: I0127 13:48:14.586428 4914 generic.go:334] "Generic (PLEG): container finished" podID="3080a558-3dff-475d-a18d-c9660c4a1b47" containerID="993d8c701ab6304cc0402a2afba56c20d310902f14aedfed44d04ebe3f59677d" exitCode=0
Jan 27 13:48:14 crc kubenswrapper[4914]: I0127 13:48:14.996198 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqtkq"]
Jan 27 13:48:14 crc kubenswrapper[4914]: I0127 13:48:14.996399 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cqtkq" podUID="00d13ecb-5cee-479b-a638-530382bb5ec6" containerName="registry-server" containerID="cri-o://7417b9c6418661ec2a367ad8b5882abe13d5c3a8749f8dd720fbcf1be4308012" gracePeriod=2
Jan 27 13:48:15 crc kubenswrapper[4914]: I0127 13:48:15.849532 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nff6l"
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.048286 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3080a558-3dff-475d-a18d-c9660c4a1b47-utilities\") pod \"3080a558-3dff-475d-a18d-c9660c4a1b47\" (UID: \"3080a558-3dff-475d-a18d-c9660c4a1b47\") "
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.048390 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6skk\" (UniqueName: \"kubernetes.io/projected/3080a558-3dff-475d-a18d-c9660c4a1b47-kube-api-access-m6skk\") pod \"3080a558-3dff-475d-a18d-c9660c4a1b47\" (UID: \"3080a558-3dff-475d-a18d-c9660c4a1b47\") "
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.049342 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3080a558-3dff-475d-a18d-c9660c4a1b47-utilities" (OuterVolumeSpecName: "utilities") pod "3080a558-3dff-475d-a18d-c9660c4a1b47" (UID: "3080a558-3dff-475d-a18d-c9660c4a1b47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.049407 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3080a558-3dff-475d-a18d-c9660c4a1b47-catalog-content\") pod \"3080a558-3dff-475d-a18d-c9660c4a1b47\" (UID: \"3080a558-3dff-475d-a18d-c9660c4a1b47\") "
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.049777 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3080a558-3dff-475d-a18d-c9660c4a1b47-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.056805 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3080a558-3dff-475d-a18d-c9660c4a1b47-kube-api-access-m6skk" (OuterVolumeSpecName: "kube-api-access-m6skk") pod "3080a558-3dff-475d-a18d-c9660c4a1b47" (UID: "3080a558-3dff-475d-a18d-c9660c4a1b47"). InnerVolumeSpecName "kube-api-access-m6skk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.083058 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3080a558-3dff-475d-a18d-c9660c4a1b47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3080a558-3dff-475d-a18d-c9660c4a1b47" (UID: "3080a558-3dff-475d-a18d-c9660c4a1b47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.150673 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6skk\" (UniqueName: \"kubernetes.io/projected/3080a558-3dff-475d-a18d-c9660c4a1b47-kube-api-access-m6skk\") on node \"crc\" DevicePath \"\""
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.150704 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3080a558-3dff-475d-a18d-c9660c4a1b47-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.602187 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nff6l" event={"ID":"3080a558-3dff-475d-a18d-c9660c4a1b47","Type":"ContainerDied","Data":"ffd17157eb218560fbd57c70880e45037aec0ece0278d56eef01ea741570142c"}
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.602229 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nff6l"
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.602254 4914 scope.go:117] "RemoveContainer" containerID="993d8c701ab6304cc0402a2afba56c20d310902f14aedfed44d04ebe3f59677d"
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.605655 4914 generic.go:334] "Generic (PLEG): container finished" podID="00d13ecb-5cee-479b-a638-530382bb5ec6" containerID="7417b9c6418661ec2a367ad8b5882abe13d5c3a8749f8dd720fbcf1be4308012" exitCode=0
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.605772 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtkq" event={"ID":"00d13ecb-5cee-479b-a638-530382bb5ec6","Type":"ContainerDied","Data":"7417b9c6418661ec2a367ad8b5882abe13d5c3a8749f8dd720fbcf1be4308012"}
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.642417 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cqtkq"
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.651989 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nff6l"]
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.654817 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nff6l"]
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.656709 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d13ecb-5cee-479b-a638-530382bb5ec6-utilities\") pod \"00d13ecb-5cee-479b-a638-530382bb5ec6\" (UID: \"00d13ecb-5cee-479b-a638-530382bb5ec6\") "
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.656782 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d13ecb-5cee-479b-a638-530382bb5ec6-catalog-content\") pod \"00d13ecb-5cee-479b-a638-530382bb5ec6\" (UID: \"00d13ecb-5cee-479b-a638-530382bb5ec6\") "
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.656815 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67rrq\" (UniqueName: \"kubernetes.io/projected/00d13ecb-5cee-479b-a638-530382bb5ec6-kube-api-access-67rrq\") pod \"00d13ecb-5cee-479b-a638-530382bb5ec6\" (UID: \"00d13ecb-5cee-479b-a638-530382bb5ec6\") "
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.657570 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d13ecb-5cee-479b-a638-530382bb5ec6-utilities" (OuterVolumeSpecName: "utilities") pod "00d13ecb-5cee-479b-a638-530382bb5ec6" (UID: "00d13ecb-5cee-479b-a638-530382bb5ec6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.663172 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d13ecb-5cee-479b-a638-530382bb5ec6-kube-api-access-67rrq" (OuterVolumeSpecName: "kube-api-access-67rrq") pod "00d13ecb-5cee-479b-a638-530382bb5ec6" (UID: "00d13ecb-5cee-479b-a638-530382bb5ec6"). InnerVolumeSpecName "kube-api-access-67rrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.757718 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67rrq\" (UniqueName: \"kubernetes.io/projected/00d13ecb-5cee-479b-a638-530382bb5ec6-kube-api-access-67rrq\") on node \"crc\" DevicePath \"\""
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.758233 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d13ecb-5cee-479b-a638-530382bb5ec6-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.818296 4914 scope.go:117] "RemoveContainer" containerID="ae9aae64c3c5949b3e316a0919d825e3da056ade6b5941382be6cc7d23877cb3"
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.821648 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d13ecb-5cee-479b-a638-530382bb5ec6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00d13ecb-5cee-479b-a638-530382bb5ec6" (UID: "00d13ecb-5cee-479b-a638-530382bb5ec6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:48:16 crc kubenswrapper[4914]: I0127 13:48:16.858902 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d13ecb-5cee-479b-a638-530382bb5ec6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 13:48:17 crc kubenswrapper[4914]: I0127 13:48:17.614228 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cqtkq"
Jan 27 13:48:17 crc kubenswrapper[4914]: I0127 13:48:17.614217 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtkq" event={"ID":"00d13ecb-5cee-479b-a638-530382bb5ec6","Type":"ContainerDied","Data":"a34d4211c0febaabbbd767973154a17632368c7ba07d84e8d31e05655adb9e01"}
Jan 27 13:48:17 crc kubenswrapper[4914]: I0127 13:48:17.642257 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqtkq"]
Jan 27 13:48:17 crc kubenswrapper[4914]: I0127 13:48:17.646250 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cqtkq"]
Jan 27 13:48:18 crc kubenswrapper[4914]: I0127 13:48:18.301951 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d13ecb-5cee-479b-a638-530382bb5ec6" path="/var/lib/kubelet/pods/00d13ecb-5cee-479b-a638-530382bb5ec6/volumes"
Jan 27 13:48:18 crc kubenswrapper[4914]: I0127 13:48:18.302945 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3080a558-3dff-475d-a18d-c9660c4a1b47" path="/var/lib/kubelet/pods/3080a558-3dff-475d-a18d-c9660c4a1b47/volumes"
Jan 27 13:48:19 crc kubenswrapper[4914]: I0127 13:48:19.059289 4914 scope.go:117] "RemoveContainer" containerID="45e8ca55a3cb3a9f48a61ca8aeb25aac17f22eb7925fa480e110eb5eebdb8c40"
Jan 27 13:48:19 crc kubenswrapper[4914]: I0127 13:48:19.095569 4914 scope.go:117] "RemoveContainer" containerID="7417b9c6418661ec2a367ad8b5882abe13d5c3a8749f8dd720fbcf1be4308012"
Jan 27 13:48:19 crc kubenswrapper[4914]: I0127 13:48:19.126594 4914 scope.go:117] "RemoveContainer" containerID="ac6fa26317b525bdb0a1e807a63e5d7eb877230e8ef81323aa1696d8521c574e"
Jan 27 13:48:19 crc kubenswrapper[4914]: I0127 13:48:19.147604 4914 scope.go:117] "RemoveContainer" containerID="b5999d7c6d2c7e91d925c45964035e22859cafea3de5b0f2903ada28fcc8cc29"
Jan 27 13:48:19 crc kubenswrapper[4914]: I0127 13:48:19.628968 4914 generic.go:334] "Generic (PLEG): container finished" podID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" containerID="330973c79fd15b9aff414e161e42a0a5d59b3177cc62bb3cc6e2571c637e2a76" exitCode=0
Jan 27 13:48:19 crc kubenswrapper[4914]: I0127 13:48:19.629012 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pkm2z" event={"ID":"dc6a9b51-d0a6-4370-94bd-342dcfa54a99","Type":"ContainerDied","Data":"330973c79fd15b9aff414e161e42a0a5d59b3177cc62bb3cc6e2571c637e2a76"}
Jan 27 13:48:19 crc kubenswrapper[4914]: I0127 13:48:19.631511 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4gg" event={"ID":"d6cc3d29-abfa-4a3e-8251-5811e2bab91e","Type":"ContainerStarted","Data":"6ff07372258f4881de67c680c619422a7f5fe615f4a71e1d8806d160aa70ee6b"}
Jan 27 13:48:19 crc kubenswrapper[4914]: I0127 13:48:19.635698 4914 generic.go:334] "Generic (PLEG): container finished" podID="1f2a8a9b-5334-4de2-9198-7677a52f8002" containerID="47d49d21deff3161d9f89f43545414051cf46cd675ae97340c3acb007a93b494" exitCode=0
Jan 27 13:48:19 crc kubenswrapper[4914]: I0127 13:48:19.635745 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8q7ck" event={"ID":"1f2a8a9b-5334-4de2-9198-7677a52f8002","Type":"ContainerDied","Data":"47d49d21deff3161d9f89f43545414051cf46cd675ae97340c3acb007a93b494"}
Jan 27 13:48:19 crc kubenswrapper[4914]: I0127 13:48:19.641674 4914 generic.go:334] "Generic (PLEG): container finished" podID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" containerID="3d9cf990ae03248d15e2de8b3ff1faf2094ac36cdd545a3098025209ce754a03" exitCode=0
Jan 27 13:48:19 crc kubenswrapper[4914]: I0127 13:48:19.641709 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4lbv" event={"ID":"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d","Type":"ContainerDied","Data":"3d9cf990ae03248d15e2de8b3ff1faf2094ac36cdd545a3098025209ce754a03"}
Jan 27 13:48:20 crc kubenswrapper[4914]: I0127 13:48:20.650220 4914 generic.go:334] "Generic (PLEG): container finished" podID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" containerID="6ff07372258f4881de67c680c619422a7f5fe615f4a71e1d8806d160aa70ee6b" exitCode=0
Jan 27 13:48:20 crc kubenswrapper[4914]: I0127 13:48:20.650304 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4gg" event={"ID":"d6cc3d29-abfa-4a3e-8251-5811e2bab91e","Type":"ContainerDied","Data":"6ff07372258f4881de67c680c619422a7f5fe615f4a71e1d8806d160aa70ee6b"}
Jan 27 13:48:22 crc kubenswrapper[4914]: I0127 13:48:22.663700 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4lbv" event={"ID":"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d","Type":"ContainerStarted","Data":"f8df7c6da3d1435d1d9ffcdcb978c7edfe5e4e53efe3366e02857d60e4caf230"}
Jan 27 13:48:22 crc kubenswrapper[4914]: I0127 13:48:22.665672 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pkm2z" event={"ID":"dc6a9b51-d0a6-4370-94bd-342dcfa54a99","Type":"ContainerStarted","Data":"9fa6691ec60a1b026f9c868932ff96f82aed67fa8963b26e379ba43a4e13ebb2"}
Jan 27 13:48:22 crc kubenswrapper[4914]: I0127 13:48:22.689995 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r4lbv" podStartSLOduration=4.798892129 podStartE2EDuration="1m45.689965414s" podCreationTimestamp="2026-01-27 13:46:37 +0000 UTC" firstStartedPulling="2026-01-27 13:46:40.65943533 +0000 UTC m=+158.971785415" lastFinishedPulling="2026-01-27 13:48:21.550508585 +0000 UTC m=+259.862858700" observedRunningTime="2026-01-27 13:48:22.688563147 +0000 UTC m=+261.000913232" watchObservedRunningTime="2026-01-27 13:48:22.689965414 +0000 UTC m=+261.002315499"
Jan 27 13:48:23 crc kubenswrapper[4914]: I0127 13:48:23.691398 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pkm2z" podStartSLOduration=4.146085061 podStartE2EDuration="1m46.691380764s" podCreationTimestamp="2026-01-27 13:46:37 +0000 UTC" firstStartedPulling="2026-01-27 13:46:39.538121668 +0000 UTC m=+157.850471753" lastFinishedPulling="2026-01-27 13:48:22.083417351 +0000 UTC m=+260.395767456" observedRunningTime="2026-01-27 13:48:23.688502246 +0000 UTC m=+262.000852331" watchObservedRunningTime="2026-01-27 13:48:23.691380764 +0000 UTC m=+262.003730849"
Jan 27 13:48:27 crc kubenswrapper[4914]: I0127 13:48:27.696376 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4gg" event={"ID":"d6cc3d29-abfa-4a3e-8251-5811e2bab91e","Type":"ContainerStarted","Data":"09099fc04acf5029a51806c057cdf4a292b4c43f0426790935afde6c3781162b"}
Jan 27 13:48:27 crc kubenswrapper[4914]: I0127 13:48:27.947196 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pkm2z"
Jan 27 13:48:27 crc kubenswrapper[4914]: I0127 13:48:27.947279 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pkm2z"
Jan 27 13:48:28 crc kubenswrapper[4914]: I0127 13:48:28.011109 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pkm2z"
Jan 27 13:48:28 crc kubenswrapper[4914]: I0127 13:48:28.734787 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zg4gg" podStartSLOduration=5.783813569 podStartE2EDuration="1m48.734771591s" podCreationTimestamp="2026-01-27 13:46:40 +0000 UTC" firstStartedPulling="2026-01-27 13:46:42.884205694 +0000 UTC m=+161.196555789" lastFinishedPulling="2026-01-27 13:48:25.835163726 +0000 UTC m=+264.147513811" observedRunningTime="2026-01-27 13:48:28.729266141 +0000 UTC m=+267.041616246" watchObservedRunningTime="2026-01-27 13:48:28.734771591 +0000 UTC m=+267.047121676"
Jan 27 13:48:28 crc kubenswrapper[4914]: I0127 13:48:28.749263 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pkm2z"
Jan 27 13:48:29 crc kubenswrapper[4914]: I0127 13:48:29.099051 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r4lbv"
Jan 27 13:48:29 crc kubenswrapper[4914]: I0127 13:48:29.099087 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r4lbv"
Jan 27 13:48:29 crc kubenswrapper[4914]: I0127 13:48:29.140464 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r4lbv"
Jan 27 13:48:29 crc kubenswrapper[4914]: I0127 13:48:29.746918 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r4lbv"
Jan 27 13:48:30 crc kubenswrapper[4914]: I0127 13:48:30.248599 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r4lbv"]
Jan 27 13:48:30 crc kubenswrapper[4914]: I0127 13:48:30.960180 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zg4gg"
Jan 27 13:48:30 crc kubenswrapper[4914]: I0127 13:48:30.960236 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zg4gg"
Jan 27 13:48:31 crc kubenswrapper[4914]: I0127 13:48:31.717079 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r4lbv" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" containerName="registry-server" containerID="cri-o://f8df7c6da3d1435d1d9ffcdcb978c7edfe5e4e53efe3366e02857d60e4caf230" gracePeriod=2
Jan 27 13:48:31 crc kubenswrapper[4914]: I0127 13:48:31.993059 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4gg" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" containerName="registry-server" probeResult="failure" output=<
Jan 27 13:48:31 crc kubenswrapper[4914]: timeout: failed to connect service ":50051" within 1s
Jan 27 13:48:31 crc kubenswrapper[4914]: >
Jan 27 13:48:33 crc kubenswrapper[4914]: I0127 13:48:33.729784 4914 generic.go:334] "Generic (PLEG): container finished" podID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" containerID="f8df7c6da3d1435d1d9ffcdcb978c7edfe5e4e53efe3366e02857d60e4caf230" exitCode=0
Jan 27 13:48:33 crc kubenswrapper[4914]: I0127 13:48:33.729873 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4lbv" event={"ID":"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d","Type":"ContainerDied","Data":"f8df7c6da3d1435d1d9ffcdcb978c7edfe5e4e53efe3366e02857d60e4caf230"}
Jan 27 13:48:35 crc kubenswrapper[4914]: I0127 13:48:35.742103 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4lbv" event={"ID":"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d","Type":"ContainerDied","Data":"88c150bb71f700224650635b84862a90af98461a2e569e523d75ace776e575f1"}
Jan 27 13:48:35 crc kubenswrapper[4914]: I0127 13:48:35.742652 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88c150bb71f700224650635b84862a90af98461a2e569e523d75ace776e575f1"
Jan 27 13:48:35 crc kubenswrapper[4914]: I0127 13:48:35.759315 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4lbv"
Jan 27 13:48:35 crc kubenswrapper[4914]: I0127 13:48:35.910072 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxd8x\" (UniqueName: \"kubernetes.io/projected/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-kube-api-access-lxd8x\") pod \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\" (UID: \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\") "
Jan 27 13:48:35 crc kubenswrapper[4914]: I0127 13:48:35.910272 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-utilities\") pod \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\" (UID: \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\") "
Jan 27 13:48:35 crc kubenswrapper[4914]: I0127 13:48:35.910321 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-catalog-content\") pod \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\" (UID: \"f1b4a22a-26ec-4e2f-9a83-d0532ff4905d\") "
Jan 27 13:48:35 crc kubenswrapper[4914]: I0127 13:48:35.911256 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-utilities" (OuterVolumeSpecName: "utilities") pod "f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" (UID: "f1b4a22a-26ec-4e2f-9a83-d0532ff4905d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:48:35 crc kubenswrapper[4914]: I0127 13:48:35.918994 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-kube-api-access-lxd8x" (OuterVolumeSpecName: "kube-api-access-lxd8x") pod "f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" (UID: "f1b4a22a-26ec-4e2f-9a83-d0532ff4905d"). InnerVolumeSpecName "kube-api-access-lxd8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.011433 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.011468 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxd8x\" (UniqueName: \"kubernetes.io/projected/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-kube-api-access-lxd8x\") on node \"crc\" DevicePath \"\""
Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.746473 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4lbv"
Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.755210 4914 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file"
Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.756578 4914 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.756996 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d13ecb-5cee-479b-a638-530382bb5ec6" containerName="registry-server"
Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.757073 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d13ecb-5cee-479b-a638-530382bb5ec6" containerName="registry-server"
Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.757132 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" containerName="extract-content"
Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.757203 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" containerName="extract-content"
Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.757324 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" containerName="registry-server"
Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.757390 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" containerName="registry-server"
Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.757453 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3080a558-3dff-475d-a18d-c9660c4a1b47"
containerName="extract-utilities" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.757510 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="3080a558-3dff-475d-a18d-c9660c4a1b47" containerName="extract-utilities" Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.757592 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d13ecb-5cee-479b-a638-530382bb5ec6" containerName="extract-content" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.757667 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d13ecb-5cee-479b-a638-530382bb5ec6" containerName="extract-content" Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.757752 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3080a558-3dff-475d-a18d-c9660c4a1b47" containerName="extract-content" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.757823 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="3080a558-3dff-475d-a18d-c9660c4a1b47" containerName="extract-content" Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.757975 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ed2d8c-e36f-4926-baa8-7251b30d758a" containerName="pruner" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.758075 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ed2d8c-e36f-4926-baa8-7251b30d758a" containerName="pruner" Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.758139 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" containerName="extract-utilities" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.758202 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" containerName="extract-utilities" Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.758263 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3080a558-3dff-475d-a18d-c9660c4a1b47" 
containerName="registry-server" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.758327 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="3080a558-3dff-475d-a18d-c9660c4a1b47" containerName="registry-server" Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.758396 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d13ecb-5cee-479b-a638-530382bb5ec6" containerName="extract-utilities" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.758449 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d13ecb-5cee-479b-a638-530382bb5ec6" containerName="extract-utilities" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.758607 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="3080a558-3dff-475d-a18d-c9660c4a1b47" containerName="registry-server" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.758673 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d13ecb-5cee-479b-a638-530382bb5ec6" containerName="registry-server" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.758731 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ed2d8c-e36f-4926-baa8-7251b30d758a" containerName="pruner" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.758789 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" containerName="registry-server" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.759232 4914 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.759321 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.759327 4914 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.759558 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.759575 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.759589 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.759596 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.759605 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.759613 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.759622 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.759629 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.759638 4914 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.759645 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.759656 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.759663 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.759675 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.759682 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.759859 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250" gracePeriod=15 Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.760076 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.760099 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.760114 4914 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.760129 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.760140 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.760148 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.760157 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.760175 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b" gracePeriod=15 Jan 27 13:48:36 crc kubenswrapper[4914]: E0127 13:48:36.760261 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.760272 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.760425 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb" gracePeriod=15 Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.760513 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc" gracePeriod=15 Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.760661 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570" gracePeriod=15 Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.763023 4914 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.798256 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.823235 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.823297 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.823324 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.823340 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.823360 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.823385 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.823399 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.823435 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.845314 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" (UID: "f1b4a22a-26ec-4e2f-9a83-d0532ff4905d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924013 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924303 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924325 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924351 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924367 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 
13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924397 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924415 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924443 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924481 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924521 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924558 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924578 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924597 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924617 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924638 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924658 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:36 crc kubenswrapper[4914]: I0127 13:48:36.924679 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:37 crc kubenswrapper[4914]: I0127 13:48:37.061185 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:37 crc kubenswrapper[4914]: I0127 13:48:37.061450 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:37 crc kubenswrapper[4914]: I0127 13:48:37.091077 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:48:37 crc kubenswrapper[4914]: E0127 13:48:37.106912 4914 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.245:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-8q7ck.188e9aa05a76d8bd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-8q7ck,UID:1f2a8a9b-5334-4de2-9198-7677a52f8002,APIVersion:v1,ResourceVersion:28205,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 13:48:37.106366653 +0000 UTC m=+275.418716738,LastTimestamp:2026-01-27 13:48:37.106366653 +0000 UTC m=+275.418716738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 13:48:37 crc kubenswrapper[4914]: I0127 13:48:37.756325 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8q7ck" event={"ID":"1f2a8a9b-5334-4de2-9198-7677a52f8002","Type":"ContainerStarted","Data":"9410434da2d572ca598a13ba1d0a5bc1bf4b83fc8e566efd1ec6e46441f2282d"} Jan 27 13:48:37 crc kubenswrapper[4914]: I0127 13:48:37.759797 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a2429af8262c6c54be1e34281bd595a0d5d9966d565e52bf81b6c066d52ac881"} Jan 27 13:48:38 crc kubenswrapper[4914]: E0127 13:48:38.075859 4914 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.245:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-8q7ck.188e9aa05a76d8bd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-8q7ck,UID:1f2a8a9b-5334-4de2-9198-7677a52f8002,APIVersion:v1,ResourceVersion:28205,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 13:48:37.106366653 +0000 UTC m=+275.418716738,LastTimestamp:2026-01-27 13:48:37.106366653 +0000 UTC m=+275.418716738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 13:48:38 crc kubenswrapper[4914]: I0127 13:48:38.768250 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 13:48:38 crc kubenswrapper[4914]: I0127 13:48:38.770244 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 13:48:38 crc kubenswrapper[4914]: I0127 13:48:38.771506 4914 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570" exitCode=2 Jan 27 13:48:39 crc kubenswrapper[4914]: E0127 13:48:39.737631 4914 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: E0127 13:48:39.738957 4914 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: E0127 13:48:39.739504 4914 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: E0127 13:48:39.739875 4914 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: E0127 13:48:39.740408 4914 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:39.740447 4914 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 13:48:46 crc kubenswrapper[4914]: E0127 13:48:39.740745 4914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="200ms" Jan 27 13:48:46 crc kubenswrapper[4914]: E0127 13:48:39.942035 4914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="400ms" Jan 27 13:48:46 crc 
kubenswrapper[4914]: E0127 13:48:40.342930 4914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="800ms" Jan 27 13:48:46 crc kubenswrapper[4914]: E0127 13:48:40.784516 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:48:40Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:48:40Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:48:40Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:48:40Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:024b1ed0676c2e11f6a319392c82e7acd0ceeae31ca00b202307c4d86a796b20\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ada03173793960eaa0e4263282fcbf5af3dea8aaf2c3b0d864906108db062e8a\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1672061160},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:200a8645cdce6d8fa5a43098c67b88945e73bd2cae92ca7b61297d44fdc66978\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:fb0551b0a119afe949321afb7bfcae7fd008137fa5b974a8a4c36e3d937e8bce\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201577188},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7fcd05db1e385722044c25252c5c1a9c8cc3e5006f653c1603be226a664aa127\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:801d959017e9084bffe0b98a80a2a973823e6fe49ea08babc7fa62192bb7f175\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1185919222},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:6d91aecdb391dd0cbb56f2b6335674bd2b4a25c63f0b9e893ba8977a71be3c0d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:98739198606db13baf3fa39b12298669778a619dff80b9b5d51987d7f76056c9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180173538},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\
\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94
b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: E0127 13:48:40.784924 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: E0127 13:48:40.785120 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: E0127 13:48:40.785335 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: E0127 13:48:40.785551 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node 
\"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: E0127 13:48:40.785568 4914 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.801333 4914 generic.go:334] "Generic (PLEG): container finished" podID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" containerID="13b9a3dd8837d07a3a149b9b782edca6306d2752ee4e1c8196a3794907816574" exitCode=0 Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.801431 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"59c2b004-9b1e-40e8-82dc-5b8361f8627e","Type":"ContainerDied","Data":"13b9a3dd8837d07a3a149b9b782edca6306d2752ee4e1c8196a3794907816574"} Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.802919 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.803151 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.803331 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.806473 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.809720 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.810669 4914 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b" exitCode=0 Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.810686 4914 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc" exitCode=0 Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.810695 4914 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb" exitCode=0 Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.810701 4914 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250" exitCode=0 Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.810763 4914 scope.go:117] "RemoveContainer" containerID="fbb8f800d7dcd59a5be156405c8a67be80b9019007186dae2ec0839b54360ae9" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.822497 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a2f90e9e8f1a2ab51054e59eff137619b40ac960a84688741267390b724b051b"} Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.823159 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.823744 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.824287 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.824612 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.998556 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zg4gg" 
Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.999012 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.999261 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.999595 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:40.999783 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.000087 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 
13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.044644 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.045352 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.045803 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.046053 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.046230 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.046402 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: E0127 13:48:41.143942 4914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="1.6s" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.755465 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.756346 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.756738 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.756944 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.757168 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.757366 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.757605 4914 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.757793 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.830145 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.830953 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.830978 4914 scope.go:117] "RemoveContainer" containerID="5d89bcfe09437e0a9038efccefc4d8756f87507d828fc9e9e366fd2d49014a4b" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.831875 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.832069 4914 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.832296 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.832517 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.832773 4914 status_manager.go:851] "Failed to get status for pod" 
podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.833121 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.833435 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.833624 4914 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.833942 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.834151 4914 status_manager.go:851] "Failed to get status for pod" 
podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.834771 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.835059 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.859899 4914 scope.go:117] "RemoveContainer" containerID="9048c16eaee03511e68efe1640eb3356da970df246311710670602f973b0fbfc" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.876537 4914 scope.go:117] "RemoveContainer" containerID="772e35f81850de77f3a63d0ce7056c9397a8419a1bd95e92603434ad93b62bbb" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.889864 4914 scope.go:117] "RemoveContainer" containerID="897d0e43b1be989219a038390f60054b85634b14e4d225185d0cd1d6e1e4c570" Jan 27 13:48:46 crc kubenswrapper[4914]: I0127 13:48:41.895421 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:41.895508 4914 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:41.895557 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:41.896629 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:41.896693 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:41.896707 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:41.903983 4914 scope.go:117] "RemoveContainer" containerID="dbb0f603fde1eb0cb45c30d98105d2d071f26940989889707446d70b2716d250" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:41.996700 4914 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:41.996723 4914 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:41.996731 4914 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.158012 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.158441 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.158762 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" 
pod="openshift-marketplace/redhat-operators-zg4gg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.159262 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.160181 4914 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.160476 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.180599 4914 scope.go:117] "RemoveContainer" containerID="4be90fa9da3318c1c24810be4271daa06d28ba3bd61821cfcd7afbdcb84a6221" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.239661 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.240214 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.240499 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.240815 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.241209 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.241454 4914 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.241686 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.297312 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.297515 4914 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.297686 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.298098 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial 
tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.298544 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.298783 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.301198 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.401333 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/59c2b004-9b1e-40e8-82dc-5b8361f8627e-var-lock\") pod \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\" (UID: \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\") " Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.401431 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59c2b004-9b1e-40e8-82dc-5b8361f8627e-kubelet-dir\") pod \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\" (UID: \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\") " Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.401468 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/59c2b004-9b1e-40e8-82dc-5b8361f8627e-kube-api-access\") pod \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\" (UID: \"59c2b004-9b1e-40e8-82dc-5b8361f8627e\") " Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.403082 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59c2b004-9b1e-40e8-82dc-5b8361f8627e-var-lock" (OuterVolumeSpecName: "var-lock") pod "59c2b004-9b1e-40e8-82dc-5b8361f8627e" (UID: "59c2b004-9b1e-40e8-82dc-5b8361f8627e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.403178 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59c2b004-9b1e-40e8-82dc-5b8361f8627e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "59c2b004-9b1e-40e8-82dc-5b8361f8627e" (UID: "59c2b004-9b1e-40e8-82dc-5b8361f8627e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.408072 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c2b004-9b1e-40e8-82dc-5b8361f8627e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "59c2b004-9b1e-40e8-82dc-5b8361f8627e" (UID: "59c2b004-9b1e-40e8-82dc-5b8361f8627e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.502405 4914 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59c2b004-9b1e-40e8-82dc-5b8361f8627e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.502429 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59c2b004-9b1e-40e8-82dc-5b8361f8627e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.502440 4914 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/59c2b004-9b1e-40e8-82dc-5b8361f8627e-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 13:48:47 crc kubenswrapper[4914]: E0127 13:48:42.744338 4914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="3.2s" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.841785 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"59c2b004-9b1e-40e8-82dc-5b8361f8627e","Type":"ContainerDied","Data":"1d012b75e57a4055fa16e177abb53aa0fae9dd6e5748642d418a7f07abecbe2b"} Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.841853 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d012b75e57a4055fa16e177abb53aa0fae9dd6e5748642d418a7f07abecbe2b" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.841847 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.856610 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.856978 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.857214 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.857406 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: I0127 13:48:42.858033 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:47 crc kubenswrapper[4914]: E0127 13:48:45.945875 4914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="6.4s" Jan 27 13:48:48 crc kubenswrapper[4914]: E0127 13:48:48.076789 4914 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.245:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-8q7ck.188e9aa05a76d8bd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-8q7ck,UID:1f2a8a9b-5334-4de2-9198-7677a52f8002,APIVersion:v1,ResourceVersion:28205,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 13:48:37.106366653 +0000 UTC m=+275.418716738,LastTimestamp:2026-01-27 13:48:37.106366653 +0000 UTC m=+275.418716738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 13:48:48 crc kubenswrapper[4914]: I0127 13:48:48.368418 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:48:48 crc kubenswrapper[4914]: I0127 13:48:48.368466 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:48:48 crc kubenswrapper[4914]: I0127 13:48:48.403861 4914 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:48:48 crc kubenswrapper[4914]: I0127 13:48:48.404457 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:48 crc kubenswrapper[4914]: I0127 13:48:48.405461 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:48 crc kubenswrapper[4914]: I0127 13:48:48.405774 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:48 crc kubenswrapper[4914]: I0127 13:48:48.406149 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:48 crc kubenswrapper[4914]: I0127 13:48:48.406410 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:48 crc kubenswrapper[4914]: I0127 13:48:48.920556 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8q7ck" Jan 27 13:48:48 crc kubenswrapper[4914]: I0127 13:48:48.921242 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:48 crc kubenswrapper[4914]: I0127 13:48:48.921550 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:48 crc kubenswrapper[4914]: I0127 13:48:48.921807 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:48 crc kubenswrapper[4914]: I0127 13:48:48.922063 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:48 crc 
kubenswrapper[4914]: I0127 13:48:48.922273 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:51 crc kubenswrapper[4914]: E0127 13:48:51.083168 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:48:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:48:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:48:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:48:51Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:024b1ed0676c2e11f6a319392c82e7acd0ceeae31ca00b202307c4d86a796b20\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ada03173793960eaa0e4263282fcbf5af3dea8aaf2c3b0d864906108db062e8a\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1672061160},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\
\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:200a8645cdce6d8fa5a43098c67b88945e73bd2cae92ca7b61297d44fdc66978\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:fb0551b0a119afe949321afb7bfcae7fd008137fa5b974a8a4c36e3d937e8bce\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201577188},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7fcd05db1e385722044c25252c5c1a9c8cc3e5006f653c1603be226a664aa127\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:801d959017e9084bffe0b98a80a2a973823e6fe49ea08babc7fa62192bb7f175\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1185919222},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:6d91aecdb391dd0cbb56f2b6335674bd2b4a25c63f0b9e893ba8977a71be3c0d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:98739198606db13baf3fa39b12298669778a619dff80b9b5d51987d7f76056c9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180173538},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeByt
es\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8
f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\
\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:51 crc kubenswrapper[4914]: E0127 13:48:51.084142 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:51 crc kubenswrapper[4914]: E0127 13:48:51.084459 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:51 crc kubenswrapper[4914]: E0127 13:48:51.084615 4914 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:51 crc kubenswrapper[4914]: E0127 13:48:51.084749 4914 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:51 crc kubenswrapper[4914]: E0127 13:48:51.084762 4914 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:48:51 crc kubenswrapper[4914]: I0127 13:48:51.897979 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 13:48:51 crc kubenswrapper[4914]: I0127 13:48:51.898036 4914 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7" exitCode=1 Jan 27 13:48:51 crc kubenswrapper[4914]: I0127 13:48:51.898073 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7"} Jan 27 13:48:51 crc kubenswrapper[4914]: I0127 13:48:51.898600 4914 scope.go:117] "RemoveContainer" containerID="0e132631f8e28704624daaa128bbf7b1f539e3c1fd5928ae868d84256a9b11d7" Jan 27 13:48:51 crc kubenswrapper[4914]: I0127 13:48:51.899067 4914 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:51 crc kubenswrapper[4914]: I0127 13:48:51.899541 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" 
pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:51 crc kubenswrapper[4914]: I0127 13:48:51.899813 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:51 crc kubenswrapper[4914]: I0127 13:48:51.900215 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:51 crc kubenswrapper[4914]: I0127 13:48:51.900466 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:51 crc kubenswrapper[4914]: I0127 13:48:51.900757 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.294359 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.296887 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.297253 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.297665 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.298067 4914 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.298246 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.298395 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.298567 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.298769 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.299091 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.299363 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" 
pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.299821 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.300176 4914 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.318711 4914 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b979f7f-2cfd-417e-aa1f-6108ebb77e17" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.318749 4914 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b979f7f-2cfd-417e-aa1f-6108ebb77e17" Jan 27 13:48:52 crc kubenswrapper[4914]: E0127 13:48:52.319250 4914 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.319701 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:52 crc kubenswrapper[4914]: W0127 13:48:52.336707 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-dd3f7c80f838b7c63fe7529e3e9017b639453dd6e1663aa85d1fa03b0d1ec1cf WatchSource:0}: Error finding container dd3f7c80f838b7c63fe7529e3e9017b639453dd6e1663aa85d1fa03b0d1ec1cf: Status 404 returned error can't find the container with id dd3f7c80f838b7c63fe7529e3e9017b639453dd6e1663aa85d1fa03b0d1ec1cf Jan 27 13:48:52 crc kubenswrapper[4914]: E0127 13:48:52.347478 4914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="7s" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.904913 4914 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f23cfea58f6d5e469c52b6d6bbfb16ece6bc7813a0cc43b51a00a168fe14fab2" exitCode=0 Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.905005 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f23cfea58f6d5e469c52b6d6bbfb16ece6bc7813a0cc43b51a00a168fe14fab2"} Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.905288 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dd3f7c80f838b7c63fe7529e3e9017b639453dd6e1663aa85d1fa03b0d1ec1cf"} Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.905544 4914 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3b979f7f-2cfd-417e-aa1f-6108ebb77e17" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.905564 4914 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b979f7f-2cfd-417e-aa1f-6108ebb77e17" Jan 27 13:48:52 crc kubenswrapper[4914]: E0127 13:48:52.906108 4914 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.906338 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.906760 4914 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.907166 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.907631 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.908090 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.908372 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.909949 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.910000 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c9ecc23bd5c421205c43be4efb3273c1d28c9214898cd45409e32dfb8281a152"} Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.910593 4914 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.910919 4914 status_manager.go:851] "Failed to get status for pod" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.911260 4914 status_manager.go:851] "Failed to get status for pod" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" pod="openshift-marketplace/community-operators-r4lbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r4lbv\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.911677 4914 status_manager.go:851] "Failed to get status for pod" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" pod="openshift-marketplace/redhat-operators-zg4gg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zg4gg\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.911977 4914 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Jan 27 13:48:52 crc kubenswrapper[4914]: I0127 13:48:52.912271 4914 status_manager.go:851] "Failed to get status for pod" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" pod="openshift-marketplace/certified-operators-8q7ck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8q7ck\": dial tcp 
38.129.56.245:6443: connect: connection refused" Jan 27 13:48:54 crc kubenswrapper[4914]: I0127 13:48:54.925740 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5ffa29ecff803664837fdd72fd75d3c8e9d067c4eb32027ac90ba524f371547b"} Jan 27 13:48:54 crc kubenswrapper[4914]: I0127 13:48:54.926240 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e122798811e2ee453d98536ab7d89035be494148a8d97b40da49df36290e777e"} Jan 27 13:48:54 crc kubenswrapper[4914]: I0127 13:48:54.926254 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1a6e6f0efc4337dc337cc4911c3acfb5ae8d913fcdde3e91c647b7ada8e59de9"} Jan 27 13:48:55 crc kubenswrapper[4914]: I0127 13:48:55.942151 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d42ce8302d8fec687e269f83614b91214ae37145791e716bad261f2347d53c6a"} Jan 27 13:48:55 crc kubenswrapper[4914]: I0127 13:48:55.942455 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e1a3857926bbbe4fdc566411f41c1ad4eccaa05a9a6237a6881ec2a99dcbb747"} Jan 27 13:48:55 crc kubenswrapper[4914]: I0127 13:48:55.942451 4914 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b979f7f-2cfd-417e-aa1f-6108ebb77e17" Jan 27 13:48:55 crc kubenswrapper[4914]: I0127 13:48:55.942474 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:55 crc kubenswrapper[4914]: I0127 13:48:55.942483 4914 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b979f7f-2cfd-417e-aa1f-6108ebb77e17" Jan 27 13:48:57 crc kubenswrapper[4914]: I0127 13:48:57.319766 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:57 crc kubenswrapper[4914]: I0127 13:48:57.320123 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:57 crc kubenswrapper[4914]: I0127 13:48:57.325418 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:48:57 crc kubenswrapper[4914]: I0127 13:48:57.440554 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:48:58 crc kubenswrapper[4914]: I0127 13:48:58.943089 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:48:58 crc kubenswrapper[4914]: I0127 13:48:58.946904 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:49:00 crc kubenswrapper[4914]: I0127 13:49:00.953364 4914 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:49:01 crc kubenswrapper[4914]: I0127 13:49:01.970675 4914 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b979f7f-2cfd-417e-aa1f-6108ebb77e17" Jan 27 13:49:01 crc kubenswrapper[4914]: I0127 13:49:01.970704 4914 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3b979f7f-2cfd-417e-aa1f-6108ebb77e17" Jan 27 13:49:01 crc kubenswrapper[4914]: I0127 13:49:01.975414 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:49:02 crc kubenswrapper[4914]: I0127 13:49:02.109485 4914 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 13:49:02 crc kubenswrapper[4914]: I0127 13:49:02.321360 4914 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="81e0724a-3d31-4c5e-aefc-e0ae289bbcd7" Jan 27 13:49:02 crc kubenswrapper[4914]: I0127 13:49:02.977080 4914 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b979f7f-2cfd-417e-aa1f-6108ebb77e17" Jan 27 13:49:02 crc kubenswrapper[4914]: I0127 13:49:02.977110 4914 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b979f7f-2cfd-417e-aa1f-6108ebb77e17" Jan 27 13:49:02 crc kubenswrapper[4914]: I0127 13:49:02.980675 4914 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="81e0724a-3d31-4c5e-aefc-e0ae289bbcd7" Jan 27 13:49:07 crc kubenswrapper[4914]: I0127 13:49:07.444980 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:49:10 crc kubenswrapper[4914]: I0127 13:49:10.203083 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 13:49:10 crc kubenswrapper[4914]: I0127 13:49:10.759501 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 
13:49:10 crc kubenswrapper[4914]: I0127 13:49:10.808728 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 13:49:10 crc kubenswrapper[4914]: I0127 13:49:10.832156 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 13:49:11 crc kubenswrapper[4914]: I0127 13:49:11.087508 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 13:49:11 crc kubenswrapper[4914]: I0127 13:49:11.362799 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 13:49:11 crc kubenswrapper[4914]: I0127 13:49:11.613786 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 13:49:11 crc kubenswrapper[4914]: I0127 13:49:11.809685 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 13:49:12 crc kubenswrapper[4914]: I0127 13:49:12.022448 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 13:49:12 crc kubenswrapper[4914]: I0127 13:49:12.194346 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 13:49:12 crc kubenswrapper[4914]: I0127 13:49:12.290383 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 13:49:12 crc kubenswrapper[4914]: I0127 13:49:12.363458 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 13:49:12 crc kubenswrapper[4914]: I0127 13:49:12.391074 4914 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 13:49:12 crc kubenswrapper[4914]: I0127 13:49:12.403320 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 13:49:12 crc kubenswrapper[4914]: I0127 13:49:12.559010 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 13:49:12 crc kubenswrapper[4914]: I0127 13:49:12.598109 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 13:49:12 crc kubenswrapper[4914]: I0127 13:49:12.711017 4914 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 13:49:12 crc kubenswrapper[4914]: I0127 13:49:12.739313 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 13:49:12 crc kubenswrapper[4914]: I0127 13:49:12.783745 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 13:49:12 crc kubenswrapper[4914]: I0127 13:49:12.786771 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 13:49:12 crc kubenswrapper[4914]: I0127 13:49:12.867743 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 13:49:13 crc kubenswrapper[4914]: I0127 13:49:13.047097 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 13:49:13 crc kubenswrapper[4914]: I0127 13:49:13.161356 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 13:49:13 crc kubenswrapper[4914]: I0127 13:49:13.341523 4914 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 13:49:13 crc kubenswrapper[4914]: I0127 13:49:13.540253 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 13:49:13 crc kubenswrapper[4914]: I0127 13:49:13.632137 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 13:49:13 crc kubenswrapper[4914]: I0127 13:49:13.640142 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 13:49:13 crc kubenswrapper[4914]: I0127 13:49:13.687391 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 13:49:13 crc kubenswrapper[4914]: I0127 13:49:13.793721 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 13:49:13 crc kubenswrapper[4914]: I0127 13:49:13.809337 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 13:49:13 crc kubenswrapper[4914]: I0127 13:49:13.932143 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 13:49:14 crc kubenswrapper[4914]: I0127 13:49:14.030885 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 13:49:14 crc kubenswrapper[4914]: I0127 13:49:14.062900 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 13:49:14 crc kubenswrapper[4914]: I0127 13:49:14.064976 4914 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 13:49:14 crc 
kubenswrapper[4914]: I0127 13:49:14.219163 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 13:49:14 crc kubenswrapper[4914]: I0127 13:49:14.251852 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 13:49:14 crc kubenswrapper[4914]: I0127 13:49:14.445778 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 13:49:14 crc kubenswrapper[4914]: I0127 13:49:14.470684 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 13:49:14 crc kubenswrapper[4914]: I0127 13:49:14.546860 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 13:49:14 crc kubenswrapper[4914]: I0127 13:49:14.597107 4914 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 13:49:14 crc kubenswrapper[4914]: I0127 13:49:14.655004 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 13:49:14 crc kubenswrapper[4914]: I0127 13:49:14.743898 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 13:49:14 crc kubenswrapper[4914]: I0127 13:49:14.983008 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 13:49:15 crc kubenswrapper[4914]: I0127 13:49:15.077950 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 13:49:15 crc kubenswrapper[4914]: I0127 13:49:15.149877 4914 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 13:49:15 crc kubenswrapper[4914]: I0127 13:49:15.395167 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 13:49:15 crc kubenswrapper[4914]: I0127 13:49:15.408799 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 13:49:15 crc kubenswrapper[4914]: I0127 13:49:15.450965 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 13:49:15 crc kubenswrapper[4914]: I0127 13:49:15.563461 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 13:49:15 crc kubenswrapper[4914]: I0127 13:49:15.595536 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 13:49:15 crc kubenswrapper[4914]: I0127 13:49:15.616573 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 13:49:15 crc kubenswrapper[4914]: I0127 13:49:15.681593 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 13:49:15 crc kubenswrapper[4914]: I0127 13:49:15.711897 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 13:49:15 crc kubenswrapper[4914]: I0127 13:49:15.796781 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 13:49:15 crc kubenswrapper[4914]: I0127 13:49:15.954044 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 13:49:15 crc kubenswrapper[4914]: I0127 13:49:15.994315 4914 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.016920 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.030780 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.148524 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.191581 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.252167 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.321298 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.328942 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.414525 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.418727 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.431996 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.480194 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.518179 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.565878 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.593994 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.602821 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.610024 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.707734 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.707997 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.733881 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.878528 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.889816 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.898519 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.963450 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 27 13:49:16 crc kubenswrapper[4914]: I0127 13:49:16.988451 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.060537 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.251065 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.327943 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.338371 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.488810 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.504493 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.582955 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.762602 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.767945 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.801553 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.802367 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.845814 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.905898 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 27 13:49:17 crc kubenswrapper[4914]: I0127 13:49:17.909363 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.068696 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.169101 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.234912 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.361569 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.412382 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.447113 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.524403 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.714926 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.757197 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.765587 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.818801 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.819495 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.827529 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.867898 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.881046 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.923335 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.949307 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.956901 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 27 13:49:18 crc kubenswrapper[4914]: I0127 13:49:18.959726 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.045248 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.086808 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.129380 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.146388 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.304472 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.402740 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.428608 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.482061 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.536059 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.554688 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.599588 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.685366 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.918259 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.974071 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 27 13:49:19 crc kubenswrapper[4914]: I0127 13:49:19.977306 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.001073 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.030964 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.111394 4914 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.137158 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.150717 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.191265 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.240954 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.315445 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.353968 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.401961 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.506766 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.510302 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.715384 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.775956 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.777437 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.795593 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.834414 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.858939 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.904940 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.940520 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 27 13:49:20 crc kubenswrapper[4914]: I0127 13:49:20.957823 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.025789 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.058386 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.139099 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.275362 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.335634 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.346459 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.366385 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.375488 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.394472 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.439175 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.721966 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.813613 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.817758 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.851471 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.878821 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.925748 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 27 13:49:21 crc kubenswrapper[4914]: I0127 13:49:21.993751 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.026921 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.043535 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.148302 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.174886 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.248384 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.281524 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.341818 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.352671 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.376933 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.412967 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.437113 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.447087 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.474700 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.490804 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.514953 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.668926 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.727105 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.785033 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.823292 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.917655 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.945934 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 27 13:49:22 crc kubenswrapper[4914]: I0127 13:49:22.963632 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 27 13:49:23 crc kubenswrapper[4914]: I0127 13:49:23.110808 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 27 13:49:23 crc kubenswrapper[4914]: I0127 13:49:23.117559 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 27 13:49:23 crc kubenswrapper[4914]: I0127 13:49:23.127433 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 27 13:49:23 crc kubenswrapper[4914]: I0127 13:49:23.224871 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 27 13:49:23 crc kubenswrapper[4914]: I0127 13:49:23.225870 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 27 13:49:23 crc kubenswrapper[4914]: I0127 13:49:23.272453 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 27 13:49:23 crc kubenswrapper[4914]: I0127 13:49:23.401971 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 27 13:49:23 crc kubenswrapper[4914]: I0127 13:49:23.507277 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 27 13:49:23 crc kubenswrapper[4914]: I0127 13:49:23.598665 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 27 13:49:23 crc kubenswrapper[4914]: I0127 13:49:23.928447 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 27 13:49:23 crc kubenswrapper[4914]: I0127 13:49:23.947163 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 27 13:49:24 crc kubenswrapper[4914]: I0127 13:49:24.100807 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 27 13:49:24 crc kubenswrapper[4914]: I0127 13:49:24.136289 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 27 13:49:24 crc kubenswrapper[4914]: I0127 13:49:24.229565 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 27 13:49:24 crc kubenswrapper[4914]: I0127 13:49:24.382975 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 27 13:49:24 crc kubenswrapper[4914]: I0127 13:49:24.542258 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 27 13:49:24 crc kubenswrapper[4914]: I0127 13:49:24.698152 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 27 13:49:24 crc kubenswrapper[4914]: I0127 13:49:24.700152 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 27 13:49:24 crc kubenswrapper[4914]: I0127 13:49:24.802611 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 13:49:24 crc kubenswrapper[4914]: I0127 13:49:24.819775 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 27 13:49:24 crc kubenswrapper[4914]: I0127 13:49:24.978517 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 27 13:49:25 crc kubenswrapper[4914]: I0127 13:49:25.353671 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 27 13:49:25 crc kubenswrapper[4914]: I0127 13:49:25.401632 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 27 13:49:25 crc kubenswrapper[4914]: I0127 13:49:25.455047 4914 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 27 13:49:25 crc kubenswrapper[4914]: I0127 13:49:25.486245 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 27 13:49:25 crc kubenswrapper[4914]: I0127 13:49:25.586410 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 27 13:49:25 crc kubenswrapper[4914]: I0127 13:49:25.656729 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 13:49:25 crc kubenswrapper[4914]: I0127 13:49:25.691540 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 27 13:49:25 crc kubenswrapper[4914]: I0127 13:49:25.750539 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 27 13:49:25 crc kubenswrapper[4914]: I0127 13:49:25.751962 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 27 13:49:25 crc kubenswrapper[4914]: I0127 13:49:25.907474 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 27 13:49:25 crc kubenswrapper[4914]: I0127 13:49:25.981447 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 27 13:49:26 crc kubenswrapper[4914]: I0127 13:49:26.036991 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 27 13:49:26 crc kubenswrapper[4914]: I0127 13:49:26.324716 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 27 13:49:26 crc kubenswrapper[4914]: I0127 13:49:26.806029 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 27 13:49:35 crc kubenswrapper[4914]: I0127 13:49:35.765052 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 27 13:49:38 crc kubenswrapper[4914]: I0127 13:49:38.094902 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 27 13:49:38 crc kubenswrapper[4914]: I0127 13:49:38.164727 4914 generic.go:334] "Generic (PLEG): container finished" podID="7656576b-aeae-4b15-b2ab-18658770a1e5" containerID="1ae8590c8f2e12da5c32dca51ec53fe0bb5f6771669d305ca3d17672f3571794" exitCode=0
Jan 27 13:49:38 crc kubenswrapper[4914]: I0127 13:49:38.164776 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" event={"ID":"7656576b-aeae-4b15-b2ab-18658770a1e5","Type":"ContainerDied","Data":"1ae8590c8f2e12da5c32dca51ec53fe0bb5f6771669d305ca3d17672f3571794"}
Jan 27 13:49:38 crc kubenswrapper[4914]: I0127 13:49:38.165278 4914 scope.go:117] "RemoveContainer" containerID="1ae8590c8f2e12da5c32dca51ec53fe0bb5f6771669d305ca3d17672f3571794"
Jan 27 13:49:38 crc kubenswrapper[4914]: I0127 13:49:38.720865 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 27 13:49:39 crc kubenswrapper[4914]: I0127 13:49:39.171418 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" event={"ID":"7656576b-aeae-4b15-b2ab-18658770a1e5","Type":"ContainerStarted","Data":"3f2da090eeac987e00b02a97f9eeb0671e65f712b676b2730f227452feeda9a4"}
Jan 27 13:49:39 crc kubenswrapper[4914]: I0127 13:49:39.171881 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk"
Jan 27 13:49:39 crc kubenswrapper[4914]: I0127 13:49:39.174317 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk"
Jan 27 13:49:40 crc kubenswrapper[4914]: I0127 13:49:40.139267 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 27 13:49:43 crc kubenswrapper[4914]: I0127 13:49:43.249729 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 27 13:49:43 crc kubenswrapper[4914]: I0127 13:49:43.532816 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 27 13:49:44 crc kubenswrapper[4914]: I0127 13:49:44.241216 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 13:49:44 crc kubenswrapper[4914]: I0127 13:49:44.770246 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 13:49:45 crc kubenswrapper[4914]: I0127 13:49:45.081700 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 27 13:49:45 crc kubenswrapper[4914]: I0127 13:49:45.582552 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 27 13:49:45 crc kubenswrapper[4914]: I0127 13:49:45.715713 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 27 13:49:46 crc kubenswrapper[4914]: I0127 13:49:46.344555 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 27 13:49:46 crc kubenswrapper[4914]: I0127 13:49:46.877306 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 27 13:49:47 crc kubenswrapper[4914]: I0127 13:49:47.255854 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 27 13:49:47 crc kubenswrapper[4914]: I0127 13:49:47.962337 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 27 13:49:48 crc kubenswrapper[4914]: I0127 13:49:48.573632 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 27 13:49:49 crc kubenswrapper[4914]: I0127 13:49:49.610848 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.436030 4914 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.437393 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=74.437380579 podStartE2EDuration="1m14.437380579s" podCreationTimestamp="2026-01-27 13:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:49:01.032168217 +0000 UTC m=+299.344518322" watchObservedRunningTime="2026-01-27 13:49:50.437380579 +0000 UTC m=+348.749730664"
Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.438866 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8q7ck" podStartSLOduration=78.680657184 podStartE2EDuration="3m12.43886198s" podCreationTimestamp="2026-01-27 13:46:38 +0000 UTC" firstStartedPulling="2026-01-27 13:46:39.554125949 +0000 UTC m=+157.866476034" lastFinishedPulling="2026-01-27 13:48:33.312330745 +0000 UTC m=+271.624680830" observedRunningTime="2026-01-27 13:49:01.020386287 +0000 UTC m=+299.332736362" watchObservedRunningTime="2026-01-27 13:49:50.43886198 +0000 UTC m=+348.751212065"
Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.440248 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/community-operators-r4lbv"]
Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.440358 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.440377 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7","openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd"]
Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.440538 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" podUID="40c6c52c-17b5-4a94-b18f-6ce669d8409d" containerName="route-controller-manager" containerID="cri-o://212ba8facc060da58b015b9d43487970530f24e472c0b124f2b408031716990f" gracePeriod=30
Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.440799 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" podUID="51d3fdf2-9bd8-4eac-86ef-51d793f17e03" containerName="controller-manager" containerID="cri-o://bef27b7e78605bfcf9b7199a842427dc3bfb1f5e9ae36995d34ee28992365e94" gracePeriod=30
Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.469685 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=50.469667148 podStartE2EDuration="50.469667148s" podCreationTimestamp="2026-01-27 13:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:49:50.465671339 +0000 UTC m=+348.778021434" watchObservedRunningTime="2026-01-27 13:49:50.469667148 +0000 UTC m=+348.782017233"
Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.812868 4914 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.818027 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.845146 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5946dcc6c8-j442s"] Jan 27 13:49:50 crc kubenswrapper[4914]: E0127 13:49:50.845410 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d3fdf2-9bd8-4eac-86ef-51d793f17e03" containerName="controller-manager" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.845430 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d3fdf2-9bd8-4eac-86ef-51d793f17e03" containerName="controller-manager" Jan 27 13:49:50 crc kubenswrapper[4914]: E0127 13:49:50.845447 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" containerName="installer" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.845455 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" containerName="installer" Jan 27 13:49:50 crc kubenswrapper[4914]: E0127 13:49:50.845467 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c6c52c-17b5-4a94-b18f-6ce669d8409d" containerName="route-controller-manager" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.845478 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c6c52c-17b5-4a94-b18f-6ce669d8409d" containerName="route-controller-manager" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.845598 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c2b004-9b1e-40e8-82dc-5b8361f8627e" containerName="installer" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 
13:49:50.845613 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d3fdf2-9bd8-4eac-86ef-51d793f17e03" containerName="controller-manager" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.845622 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c6c52c-17b5-4a94-b18f-6ce669d8409d" containerName="route-controller-manager" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.846094 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.858489 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-config\") pod \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.858539 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-serving-cert\") pod \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.858574 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40c6c52c-17b5-4a94-b18f-6ce669d8409d-serving-cert\") pod \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.858601 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ljt6\" (UniqueName: \"kubernetes.io/projected/40c6c52c-17b5-4a94-b18f-6ce669d8409d-kube-api-access-9ljt6\") pod \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " Jan 27 13:49:50 
crc kubenswrapper[4914]: I0127 13:49:50.858626 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40c6c52c-17b5-4a94-b18f-6ce669d8409d-client-ca\") pod \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.858693 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-proxy-ca-bundles\") pod \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.858732 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvbs7\" (UniqueName: \"kubernetes.io/projected/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-kube-api-access-wvbs7\") pod \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.858774 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-client-ca\") pod \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\" (UID: \"51d3fdf2-9bd8-4eac-86ef-51d793f17e03\") " Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.858808 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c6c52c-17b5-4a94-b18f-6ce669d8409d-config\") pod \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\" (UID: \"40c6c52c-17b5-4a94-b18f-6ce669d8409d\") " Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.860533 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40c6c52c-17b5-4a94-b18f-6ce669d8409d-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"40c6c52c-17b5-4a94-b18f-6ce669d8409d" (UID: "40c6c52c-17b5-4a94-b18f-6ce669d8409d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.863506 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5946dcc6c8-j442s"] Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.864744 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-config" (OuterVolumeSpecName: "config") pod "51d3fdf2-9bd8-4eac-86ef-51d793f17e03" (UID: "51d3fdf2-9bd8-4eac-86ef-51d793f17e03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.865415 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40c6c52c-17b5-4a94-b18f-6ce669d8409d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "40c6c52c-17b5-4a94-b18f-6ce669d8409d" (UID: "40c6c52c-17b5-4a94-b18f-6ce669d8409d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.866013 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-client-ca" (OuterVolumeSpecName: "client-ca") pod "51d3fdf2-9bd8-4eac-86ef-51d793f17e03" (UID: "51d3fdf2-9bd8-4eac-86ef-51d793f17e03"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.866247 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "51d3fdf2-9bd8-4eac-86ef-51d793f17e03" (UID: "51d3fdf2-9bd8-4eac-86ef-51d793f17e03"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.867239 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "51d3fdf2-9bd8-4eac-86ef-51d793f17e03" (UID: "51d3fdf2-9bd8-4eac-86ef-51d793f17e03"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.867278 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40c6c52c-17b5-4a94-b18f-6ce669d8409d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.867304 4914 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40c6c52c-17b5-4a94-b18f-6ce669d8409d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.867314 4914 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.867323 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.867330 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.867811 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/40c6c52c-17b5-4a94-b18f-6ce669d8409d-config" (OuterVolumeSpecName: "config") pod "40c6c52c-17b5-4a94-b18f-6ce669d8409d" (UID: "40c6c52c-17b5-4a94-b18f-6ce669d8409d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.869127 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-kube-api-access-wvbs7" (OuterVolumeSpecName: "kube-api-access-wvbs7") pod "51d3fdf2-9bd8-4eac-86ef-51d793f17e03" (UID: "51d3fdf2-9bd8-4eac-86ef-51d793f17e03"). InnerVolumeSpecName "kube-api-access-wvbs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.869204 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40c6c52c-17b5-4a94-b18f-6ce669d8409d-kube-api-access-9ljt6" (OuterVolumeSpecName: "kube-api-access-9ljt6") pod "40c6c52c-17b5-4a94-b18f-6ce669d8409d" (UID: "40c6c52c-17b5-4a94-b18f-6ce669d8409d"). InnerVolumeSpecName "kube-api-access-9ljt6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.968636 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59ddba39-93c6-4524-b37c-09aed529ec74-serving-cert\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.968771 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-config\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.968887 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-client-ca\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.969048 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-proxy-ca-bundles\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.969099 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq5qc\" (UniqueName: 
\"kubernetes.io/projected/59ddba39-93c6-4524-b37c-09aed529ec74-kube-api-access-xq5qc\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.969228 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c6c52c-17b5-4a94-b18f-6ce669d8409d-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.969253 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ljt6\" (UniqueName: \"kubernetes.io/projected/40c6c52c-17b5-4a94-b18f-6ce669d8409d-kube-api-access-9ljt6\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.969267 4914 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:50 crc kubenswrapper[4914]: I0127 13:49:50.969280 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvbs7\" (UniqueName: \"kubernetes.io/projected/51d3fdf2-9bd8-4eac-86ef-51d793f17e03-kube-api-access-wvbs7\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.070984 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59ddba39-93c6-4524-b37c-09aed529ec74-serving-cert\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.071106 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-config\") 
pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.071127 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-client-ca\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.071162 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-proxy-ca-bundles\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.071181 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq5qc\" (UniqueName: \"kubernetes.io/projected/59ddba39-93c6-4524-b37c-09aed529ec74-kube-api-access-xq5qc\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.072792 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-client-ca\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.073751 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-proxy-ca-bundles\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.074945 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-config\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.076110 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59ddba39-93c6-4524-b37c-09aed529ec74-serving-cert\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.089069 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq5qc\" (UniqueName: \"kubernetes.io/projected/59ddba39-93c6-4524-b37c-09aed529ec74-kube-api-access-xq5qc\") pod \"controller-manager-5946dcc6c8-j442s\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.154604 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.189245 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.251141 4914 generic.go:334] "Generic (PLEG): container finished" podID="51d3fdf2-9bd8-4eac-86ef-51d793f17e03" containerID="bef27b7e78605bfcf9b7199a842427dc3bfb1f5e9ae36995d34ee28992365e94" exitCode=0 Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.251467 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.251368 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" event={"ID":"51d3fdf2-9bd8-4eac-86ef-51d793f17e03","Type":"ContainerDied","Data":"bef27b7e78605bfcf9b7199a842427dc3bfb1f5e9ae36995d34ee28992365e94"} Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.251738 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7" event={"ID":"51d3fdf2-9bd8-4eac-86ef-51d793f17e03","Type":"ContainerDied","Data":"1703ff6367a3c31122ac72ba11cc8ff9210e6b5fbf12dc977075fcc00866425b"} Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.251876 4914 scope.go:117] "RemoveContainer" containerID="bef27b7e78605bfcf9b7199a842427dc3bfb1f5e9ae36995d34ee28992365e94" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.264483 4914 generic.go:334] "Generic (PLEG): container finished" podID="40c6c52c-17b5-4a94-b18f-6ce669d8409d" containerID="212ba8facc060da58b015b9d43487970530f24e472c0b124f2b408031716990f" exitCode=0 Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.264518 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" 
event={"ID":"40c6c52c-17b5-4a94-b18f-6ce669d8409d","Type":"ContainerDied","Data":"212ba8facc060da58b015b9d43487970530f24e472c0b124f2b408031716990f"} Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.264548 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" event={"ID":"40c6c52c-17b5-4a94-b18f-6ce669d8409d","Type":"ContainerDied","Data":"cb3fe6001f89d5d2cb54e31cc99dc6efb4ffbdd78052a4babb245fed2a9908d9"} Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.264610 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.281621 4914 scope.go:117] "RemoveContainer" containerID="bef27b7e78605bfcf9b7199a842427dc3bfb1f5e9ae36995d34ee28992365e94" Jan 27 13:49:51 crc kubenswrapper[4914]: E0127 13:49:51.288906 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bef27b7e78605bfcf9b7199a842427dc3bfb1f5e9ae36995d34ee28992365e94\": container with ID starting with bef27b7e78605bfcf9b7199a842427dc3bfb1f5e9ae36995d34ee28992365e94 not found: ID does not exist" containerID="bef27b7e78605bfcf9b7199a842427dc3bfb1f5e9ae36995d34ee28992365e94" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.288991 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef27b7e78605bfcf9b7199a842427dc3bfb1f5e9ae36995d34ee28992365e94"} err="failed to get container status \"bef27b7e78605bfcf9b7199a842427dc3bfb1f5e9ae36995d34ee28992365e94\": rpc error: code = NotFound desc = could not find container \"bef27b7e78605bfcf9b7199a842427dc3bfb1f5e9ae36995d34ee28992365e94\": container with ID starting with bef27b7e78605bfcf9b7199a842427dc3bfb1f5e9ae36995d34ee28992365e94 not found: ID does not exist" Jan 27 13:49:51 crc 
kubenswrapper[4914]: I0127 13:49:51.289025 4914 scope.go:117] "RemoveContainer" containerID="212ba8facc060da58b015b9d43487970530f24e472c0b124f2b408031716990f" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.291456 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7"] Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.294428 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bf8d7fdcc-f89r7"] Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.304138 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd"] Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.307567 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57496df798-4xbvd"] Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.315247 4914 scope.go:117] "RemoveContainer" containerID="212ba8facc060da58b015b9d43487970530f24e472c0b124f2b408031716990f" Jan 27 13:49:51 crc kubenswrapper[4914]: E0127 13:49:51.317763 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"212ba8facc060da58b015b9d43487970530f24e472c0b124f2b408031716990f\": container with ID starting with 212ba8facc060da58b015b9d43487970530f24e472c0b124f2b408031716990f not found: ID does not exist" containerID="212ba8facc060da58b015b9d43487970530f24e472c0b124f2b408031716990f" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.317815 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"212ba8facc060da58b015b9d43487970530f24e472c0b124f2b408031716990f"} err="failed to get container status \"212ba8facc060da58b015b9d43487970530f24e472c0b124f2b408031716990f\": rpc error: code = NotFound desc = could not find container 
\"212ba8facc060da58b015b9d43487970530f24e472c0b124f2b408031716990f\": container with ID starting with 212ba8facc060da58b015b9d43487970530f24e472c0b124f2b408031716990f not found: ID does not exist" Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.365518 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5946dcc6c8-j442s"] Jan 27 13:49:51 crc kubenswrapper[4914]: I0127 13:49:51.855447 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 13:49:52 crc kubenswrapper[4914]: I0127 13:49:52.272376 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" event={"ID":"59ddba39-93c6-4524-b37c-09aed529ec74","Type":"ContainerStarted","Data":"a5414dcd5f5d49f83febaec3e3ec16099e04a02cb1f4c374329c6474abb08819"} Jan 27 13:49:52 crc kubenswrapper[4914]: I0127 13:49:52.272416 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" event={"ID":"59ddba39-93c6-4524-b37c-09aed529ec74","Type":"ContainerStarted","Data":"2d8afb8961d22538032641348a3613d95dfaaf0ff09aeca2341cc702d5cd0f49"} Jan 27 13:49:52 crc kubenswrapper[4914]: I0127 13:49:52.272697 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:52 crc kubenswrapper[4914]: I0127 13:49:52.278096 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:52 crc kubenswrapper[4914]: I0127 13:49:52.301716 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40c6c52c-17b5-4a94-b18f-6ce669d8409d" path="/var/lib/kubelet/pods/40c6c52c-17b5-4a94-b18f-6ce669d8409d/volumes" Jan 27 13:49:52 crc kubenswrapper[4914]: I0127 13:49:52.302613 4914 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51d3fdf2-9bd8-4eac-86ef-51d793f17e03" path="/var/lib/kubelet/pods/51d3fdf2-9bd8-4eac-86ef-51d793f17e03/volumes" Jan 27 13:49:52 crc kubenswrapper[4914]: I0127 13:49:52.303280 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1b4a22a-26ec-4e2f-9a83-d0532ff4905d" path="/var/lib/kubelet/pods/f1b4a22a-26ec-4e2f-9a83-d0532ff4905d/volumes" Jan 27 13:49:52 crc kubenswrapper[4914]: I0127 13:49:52.315227 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" podStartSLOduration=15.315189955 podStartE2EDuration="15.315189955s" podCreationTimestamp="2026-01-27 13:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:49:52.292820346 +0000 UTC m=+350.605170441" watchObservedRunningTime="2026-01-27 13:49:52.315189955 +0000 UTC m=+350.627540040" Jan 27 13:49:52 crc kubenswrapper[4914]: I0127 13:49:52.330225 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.026516 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.186314 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp"] Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.187237 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.190680 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.190819 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.191151 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.191235 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.191370 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.192952 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.205648 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp"] Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.298516 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c03fd4d7-90e8-40ec-9169-811b36b76904-client-ca\") pod \"route-controller-manager-f55dd996b-tjxzp\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.298588 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c03fd4d7-90e8-40ec-9169-811b36b76904-serving-cert\") pod \"route-controller-manager-f55dd996b-tjxzp\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.299207 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c03fd4d7-90e8-40ec-9169-811b36b76904-config\") pod \"route-controller-manager-f55dd996b-tjxzp\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.299328 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmwn8\" (UniqueName: \"kubernetes.io/projected/c03fd4d7-90e8-40ec-9169-811b36b76904-kube-api-access-gmwn8\") pod \"route-controller-manager-f55dd996b-tjxzp\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.376346 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.400648 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmwn8\" (UniqueName: \"kubernetes.io/projected/c03fd4d7-90e8-40ec-9169-811b36b76904-kube-api-access-gmwn8\") pod \"route-controller-manager-f55dd996b-tjxzp\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.400786 4914 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c03fd4d7-90e8-40ec-9169-811b36b76904-client-ca\") pod \"route-controller-manager-f55dd996b-tjxzp\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.400850 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c03fd4d7-90e8-40ec-9169-811b36b76904-serving-cert\") pod \"route-controller-manager-f55dd996b-tjxzp\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.400878 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c03fd4d7-90e8-40ec-9169-811b36b76904-config\") pod \"route-controller-manager-f55dd996b-tjxzp\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.402285 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c03fd4d7-90e8-40ec-9169-811b36b76904-client-ca\") pod \"route-controller-manager-f55dd996b-tjxzp\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.402412 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c03fd4d7-90e8-40ec-9169-811b36b76904-config\") pod \"route-controller-manager-f55dd996b-tjxzp\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " 
pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.407550 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c03fd4d7-90e8-40ec-9169-811b36b76904-serving-cert\") pod \"route-controller-manager-f55dd996b-tjxzp\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.418382 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmwn8\" (UniqueName: \"kubernetes.io/projected/c03fd4d7-90e8-40ec-9169-811b36b76904-kube-api-access-gmwn8\") pod \"route-controller-manager-f55dd996b-tjxzp\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.515312 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:53 crc kubenswrapper[4914]: I0127 13:49:53.909156 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp"] Jan 27 13:49:54 crc kubenswrapper[4914]: I0127 13:49:54.285476 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" event={"ID":"c03fd4d7-90e8-40ec-9169-811b36b76904","Type":"ContainerStarted","Data":"ab1786ccad5da1544d802fc9238b8587836f0f16c38d3cb3fd31b1f9fcd86b2d"} Jan 27 13:49:54 crc kubenswrapper[4914]: I0127 13:49:54.285558 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" event={"ID":"c03fd4d7-90e8-40ec-9169-811b36b76904","Type":"ContainerStarted","Data":"74157a9d1c07768fe23a1b61e7013abf6dccc8872a0576765cab89e054c000f0"} Jan 27 13:49:54 crc kubenswrapper[4914]: I0127 13:49:54.309982 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" podStartSLOduration=17.309964344 podStartE2EDuration="17.309964344s" podCreationTimestamp="2026-01-27 13:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:49:54.308683208 +0000 UTC m=+352.621033303" watchObservedRunningTime="2026-01-27 13:49:54.309964344 +0000 UTC m=+352.622314429" Jan 27 13:49:55 crc kubenswrapper[4914]: I0127 13:49:55.289917 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:55 crc kubenswrapper[4914]: I0127 13:49:55.296744 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:55 crc kubenswrapper[4914]: I0127 13:49:55.918870 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 13:49:56 crc kubenswrapper[4914]: I0127 13:49:56.476911 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 13:49:56 crc kubenswrapper[4914]: I0127 13:49:56.581257 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.227784 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.288355 4914 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.288589 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a2f90e9e8f1a2ab51054e59eff137619b40ac960a84688741267390b724b051b" gracePeriod=5 Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.402244 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5946dcc6c8-j442s"] Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.402722 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" podUID="59ddba39-93c6-4524-b37c-09aed529ec74" containerName="controller-manager" containerID="cri-o://a5414dcd5f5d49f83febaec3e3ec16099e04a02cb1f4c374329c6474abb08819" gracePeriod=30 Jan 27 
13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.497737 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp"] Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.894803 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.966186 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq5qc\" (UniqueName: \"kubernetes.io/projected/59ddba39-93c6-4524-b37c-09aed529ec74-kube-api-access-xq5qc\") pod \"59ddba39-93c6-4524-b37c-09aed529ec74\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.966303 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-client-ca\") pod \"59ddba39-93c6-4524-b37c-09aed529ec74\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.966334 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-proxy-ca-bundles\") pod \"59ddba39-93c6-4524-b37c-09aed529ec74\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.966392 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59ddba39-93c6-4524-b37c-09aed529ec74-serving-cert\") pod \"59ddba39-93c6-4524-b37c-09aed529ec74\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.966488 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-config\") pod \"59ddba39-93c6-4524-b37c-09aed529ec74\" (UID: \"59ddba39-93c6-4524-b37c-09aed529ec74\") " Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.967432 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "59ddba39-93c6-4524-b37c-09aed529ec74" (UID: "59ddba39-93c6-4524-b37c-09aed529ec74"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.967451 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-client-ca" (OuterVolumeSpecName: "client-ca") pod "59ddba39-93c6-4524-b37c-09aed529ec74" (UID: "59ddba39-93c6-4524-b37c-09aed529ec74"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.968067 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-config" (OuterVolumeSpecName: "config") pod "59ddba39-93c6-4524-b37c-09aed529ec74" (UID: "59ddba39-93c6-4524-b37c-09aed529ec74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.971561 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59ddba39-93c6-4524-b37c-09aed529ec74-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59ddba39-93c6-4524-b37c-09aed529ec74" (UID: "59ddba39-93c6-4524-b37c-09aed529ec74"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:49:57 crc kubenswrapper[4914]: I0127 13:49:57.973149 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59ddba39-93c6-4524-b37c-09aed529ec74-kube-api-access-xq5qc" (OuterVolumeSpecName: "kube-api-access-xq5qc") pod "59ddba39-93c6-4524-b37c-09aed529ec74" (UID: "59ddba39-93c6-4524-b37c-09aed529ec74"). InnerVolumeSpecName "kube-api-access-xq5qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.067997 4914 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.068065 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59ddba39-93c6-4524-b37c-09aed529ec74-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.068078 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.068089 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq5qc\" (UniqueName: \"kubernetes.io/projected/59ddba39-93c6-4524-b37c-09aed529ec74-kube-api-access-xq5qc\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.068101 4914 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59ddba39-93c6-4524-b37c-09aed529ec74-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.304525 4914 generic.go:334] "Generic (PLEG): container finished" 
podID="59ddba39-93c6-4524-b37c-09aed529ec74" containerID="a5414dcd5f5d49f83febaec3e3ec16099e04a02cb1f4c374329c6474abb08819" exitCode=0 Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.304720 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" podUID="c03fd4d7-90e8-40ec-9169-811b36b76904" containerName="route-controller-manager" containerID="cri-o://ab1786ccad5da1544d802fc9238b8587836f0f16c38d3cb3fd31b1f9fcd86b2d" gracePeriod=30 Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.305028 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.306923 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" event={"ID":"59ddba39-93c6-4524-b37c-09aed529ec74","Type":"ContainerDied","Data":"a5414dcd5f5d49f83febaec3e3ec16099e04a02cb1f4c374329c6474abb08819"} Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.306960 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5946dcc6c8-j442s" event={"ID":"59ddba39-93c6-4524-b37c-09aed529ec74","Type":"ContainerDied","Data":"2d8afb8961d22538032641348a3613d95dfaaf0ff09aeca2341cc702d5cd0f49"} Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.306976 4914 scope.go:117] "RemoveContainer" containerID="a5414dcd5f5d49f83febaec3e3ec16099e04a02cb1f4c374329c6474abb08819" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.329607 4914 scope.go:117] "RemoveContainer" containerID="a5414dcd5f5d49f83febaec3e3ec16099e04a02cb1f4c374329c6474abb08819" Jan 27 13:49:58 crc kubenswrapper[4914]: E0127 13:49:58.330160 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a5414dcd5f5d49f83febaec3e3ec16099e04a02cb1f4c374329c6474abb08819\": container with ID starting with a5414dcd5f5d49f83febaec3e3ec16099e04a02cb1f4c374329c6474abb08819 not found: ID does not exist" containerID="a5414dcd5f5d49f83febaec3e3ec16099e04a02cb1f4c374329c6474abb08819" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.330200 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5414dcd5f5d49f83febaec3e3ec16099e04a02cb1f4c374329c6474abb08819"} err="failed to get container status \"a5414dcd5f5d49f83febaec3e3ec16099e04a02cb1f4c374329c6474abb08819\": rpc error: code = NotFound desc = could not find container \"a5414dcd5f5d49f83febaec3e3ec16099e04a02cb1f4c374329c6474abb08819\": container with ID starting with a5414dcd5f5d49f83febaec3e3ec16099e04a02cb1f4c374329c6474abb08819 not found: ID does not exist" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.336873 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5946dcc6c8-j442s"] Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.340199 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5946dcc6c8-j442s"] Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.666537 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.776596 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c03fd4d7-90e8-40ec-9169-811b36b76904-config\") pod \"c03fd4d7-90e8-40ec-9169-811b36b76904\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.776725 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c03fd4d7-90e8-40ec-9169-811b36b76904-client-ca\") pod \"c03fd4d7-90e8-40ec-9169-811b36b76904\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.776753 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmwn8\" (UniqueName: \"kubernetes.io/projected/c03fd4d7-90e8-40ec-9169-811b36b76904-kube-api-access-gmwn8\") pod \"c03fd4d7-90e8-40ec-9169-811b36b76904\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.776781 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c03fd4d7-90e8-40ec-9169-811b36b76904-serving-cert\") pod \"c03fd4d7-90e8-40ec-9169-811b36b76904\" (UID: \"c03fd4d7-90e8-40ec-9169-811b36b76904\") " Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.777493 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03fd4d7-90e8-40ec-9169-811b36b76904-config" (OuterVolumeSpecName: "config") pod "c03fd4d7-90e8-40ec-9169-811b36b76904" (UID: "c03fd4d7-90e8-40ec-9169-811b36b76904"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.778086 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03fd4d7-90e8-40ec-9169-811b36b76904-client-ca" (OuterVolumeSpecName: "client-ca") pod "c03fd4d7-90e8-40ec-9169-811b36b76904" (UID: "c03fd4d7-90e8-40ec-9169-811b36b76904"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.779728 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03fd4d7-90e8-40ec-9169-811b36b76904-kube-api-access-gmwn8" (OuterVolumeSpecName: "kube-api-access-gmwn8") pod "c03fd4d7-90e8-40ec-9169-811b36b76904" (UID: "c03fd4d7-90e8-40ec-9169-811b36b76904"). InnerVolumeSpecName "kube-api-access-gmwn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.786131 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03fd4d7-90e8-40ec-9169-811b36b76904-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c03fd4d7-90e8-40ec-9169-811b36b76904" (UID: "c03fd4d7-90e8-40ec-9169-811b36b76904"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.878228 4914 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c03fd4d7-90e8-40ec-9169-811b36b76904-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.878263 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmwn8\" (UniqueName: \"kubernetes.io/projected/c03fd4d7-90e8-40ec-9169-811b36b76904-kube-api-access-gmwn8\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.878275 4914 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c03fd4d7-90e8-40ec-9169-811b36b76904-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:58 crc kubenswrapper[4914]: I0127 13:49:58.878287 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c03fd4d7-90e8-40ec-9169-811b36b76904-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.035921 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.187275 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q"] Jan 27 13:49:59 crc kubenswrapper[4914]: E0127 13:49:59.187521 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c03fd4d7-90e8-40ec-9169-811b36b76904" containerName="route-controller-manager" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.187532 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="c03fd4d7-90e8-40ec-9169-811b36b76904" containerName="route-controller-manager" Jan 27 13:49:59 crc kubenswrapper[4914]: E0127 13:49:59.187554 4914 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.187560 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 13:49:59 crc kubenswrapper[4914]: E0127 13:49:59.187567 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59ddba39-93c6-4524-b37c-09aed529ec74" containerName="controller-manager" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.187573 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="59ddba39-93c6-4524-b37c-09aed529ec74" containerName="controller-manager" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.187659 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="59ddba39-93c6-4524-b37c-09aed529ec74" containerName="controller-manager" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.187675 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="c03fd4d7-90e8-40ec-9169-811b36b76904" containerName="route-controller-manager" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.187688 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.188202 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.190612 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.190743 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.190906 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.191254 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.191884 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5"] Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.191951 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.192418 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.192703 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.197640 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.206769 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q"] Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.211667 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5"] Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.283838 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-client-ca\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.283900 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-proxy-ca-bundles\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.283943 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjszk\" (UniqueName: \"kubernetes.io/projected/934f8714-e8a2-487e-af93-4f52c49c8924-kube-api-access-bjszk\") pod \"route-controller-manager-78cd5f8d7d-jshn5\" (UID: \"934f8714-e8a2-487e-af93-4f52c49c8924\") " 
pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.283994 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/934f8714-e8a2-487e-af93-4f52c49c8924-serving-cert\") pod \"route-controller-manager-78cd5f8d7d-jshn5\" (UID: \"934f8714-e8a2-487e-af93-4f52c49c8924\") " pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.284020 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-config\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.284040 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flqrz\" (UniqueName: \"kubernetes.io/projected/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-kube-api-access-flqrz\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.284294 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-serving-cert\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.284541 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934f8714-e8a2-487e-af93-4f52c49c8924-config\") pod \"route-controller-manager-78cd5f8d7d-jshn5\" (UID: \"934f8714-e8a2-487e-af93-4f52c49c8924\") " pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.284588 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/934f8714-e8a2-487e-af93-4f52c49c8924-client-ca\") pod \"route-controller-manager-78cd5f8d7d-jshn5\" (UID: \"934f8714-e8a2-487e-af93-4f52c49c8924\") " pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.310469 4914 generic.go:334] "Generic (PLEG): container finished" podID="c03fd4d7-90e8-40ec-9169-811b36b76904" containerID="ab1786ccad5da1544d802fc9238b8587836f0f16c38d3cb3fd31b1f9fcd86b2d" exitCode=0 Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.310518 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" event={"ID":"c03fd4d7-90e8-40ec-9169-811b36b76904","Type":"ContainerDied","Data":"ab1786ccad5da1544d802fc9238b8587836f0f16c38d3cb3fd31b1f9fcd86b2d"} Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.310560 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.310597 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp" event={"ID":"c03fd4d7-90e8-40ec-9169-811b36b76904","Type":"ContainerDied","Data":"74157a9d1c07768fe23a1b61e7013abf6dccc8872a0576765cab89e054c000f0"} Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.310626 4914 scope.go:117] "RemoveContainer" containerID="ab1786ccad5da1544d802fc9238b8587836f0f16c38d3cb3fd31b1f9fcd86b2d" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.335098 4914 scope.go:117] "RemoveContainer" containerID="ab1786ccad5da1544d802fc9238b8587836f0f16c38d3cb3fd31b1f9fcd86b2d" Jan 27 13:49:59 crc kubenswrapper[4914]: E0127 13:49:59.335590 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab1786ccad5da1544d802fc9238b8587836f0f16c38d3cb3fd31b1f9fcd86b2d\": container with ID starting with ab1786ccad5da1544d802fc9238b8587836f0f16c38d3cb3fd31b1f9fcd86b2d not found: ID does not exist" containerID="ab1786ccad5da1544d802fc9238b8587836f0f16c38d3cb3fd31b1f9fcd86b2d" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.335629 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab1786ccad5da1544d802fc9238b8587836f0f16c38d3cb3fd31b1f9fcd86b2d"} err="failed to get container status \"ab1786ccad5da1544d802fc9238b8587836f0f16c38d3cb3fd31b1f9fcd86b2d\": rpc error: code = NotFound desc = could not find container \"ab1786ccad5da1544d802fc9238b8587836f0f16c38d3cb3fd31b1f9fcd86b2d\": container with ID starting with ab1786ccad5da1544d802fc9238b8587836f0f16c38d3cb3fd31b1f9fcd86b2d not found: ID does not exist" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.346774 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp"] Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.355397 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f55dd996b-tjxzp"] Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.386069 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934f8714-e8a2-487e-af93-4f52c49c8924-config\") pod \"route-controller-manager-78cd5f8d7d-jshn5\" (UID: \"934f8714-e8a2-487e-af93-4f52c49c8924\") " pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.386129 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/934f8714-e8a2-487e-af93-4f52c49c8924-client-ca\") pod \"route-controller-manager-78cd5f8d7d-jshn5\" (UID: \"934f8714-e8a2-487e-af93-4f52c49c8924\") " pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.386170 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-client-ca\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.386198 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-proxy-ca-bundles\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc 
kubenswrapper[4914]: I0127 13:49:59.386235 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjszk\" (UniqueName: \"kubernetes.io/projected/934f8714-e8a2-487e-af93-4f52c49c8924-kube-api-access-bjszk\") pod \"route-controller-manager-78cd5f8d7d-jshn5\" (UID: \"934f8714-e8a2-487e-af93-4f52c49c8924\") " pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.386295 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/934f8714-e8a2-487e-af93-4f52c49c8924-serving-cert\") pod \"route-controller-manager-78cd5f8d7d-jshn5\" (UID: \"934f8714-e8a2-487e-af93-4f52c49c8924\") " pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.386322 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-config\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.386343 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flqrz\" (UniqueName: \"kubernetes.io/projected/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-kube-api-access-flqrz\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.386371 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-serving-cert\") pod 
\"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.387762 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-proxy-ca-bundles\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.387777 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934f8714-e8a2-487e-af93-4f52c49c8924-config\") pod \"route-controller-manager-78cd5f8d7d-jshn5\" (UID: \"934f8714-e8a2-487e-af93-4f52c49c8924\") " pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.387893 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/934f8714-e8a2-487e-af93-4f52c49c8924-client-ca\") pod \"route-controller-manager-78cd5f8d7d-jshn5\" (UID: \"934f8714-e8a2-487e-af93-4f52c49c8924\") " pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.388011 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-client-ca\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.388136 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-config\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.394627 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-serving-cert\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.395266 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/934f8714-e8a2-487e-af93-4f52c49c8924-serving-cert\") pod \"route-controller-manager-78cd5f8d7d-jshn5\" (UID: \"934f8714-e8a2-487e-af93-4f52c49c8924\") " pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.403418 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flqrz\" (UniqueName: \"kubernetes.io/projected/542b62a8-0b67-4aeb-a2ec-c3d88bed3443-kube-api-access-flqrz\") pod \"controller-manager-5bb65c79b9-pnd8q\" (UID: \"542b62a8-0b67-4aeb-a2ec-c3d88bed3443\") " pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.406750 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjszk\" (UniqueName: \"kubernetes.io/projected/934f8714-e8a2-487e-af93-4f52c49c8924-kube-api-access-bjszk\") pod \"route-controller-manager-78cd5f8d7d-jshn5\" (UID: \"934f8714-e8a2-487e-af93-4f52c49c8924\") " pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 
13:49:59.469458 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.507040 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.517362 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.926134 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q"] Jan 27 13:49:59 crc kubenswrapper[4914]: I0127 13:49:59.981161 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5"] Jan 27 13:49:59 crc kubenswrapper[4914]: W0127 13:49:59.995181 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod934f8714_e8a2_487e_af93_4f52c49c8924.slice/crio-07a8719e7b946eb2ac14806944594a4decfbdc0ef292a74da38cab25d09bd1bd WatchSource:0}: Error finding container 07a8719e7b946eb2ac14806944594a4decfbdc0ef292a74da38cab25d09bd1bd: Status 404 returned error can't find the container with id 07a8719e7b946eb2ac14806944594a4decfbdc0ef292a74da38cab25d09bd1bd Jan 27 13:50:00 crc kubenswrapper[4914]: I0127 13:50:00.300541 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59ddba39-93c6-4524-b37c-09aed529ec74" path="/var/lib/kubelet/pods/59ddba39-93c6-4524-b37c-09aed529ec74/volumes" Jan 27 13:50:00 crc kubenswrapper[4914]: I0127 13:50:00.301357 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03fd4d7-90e8-40ec-9169-811b36b76904" path="/var/lib/kubelet/pods/c03fd4d7-90e8-40ec-9169-811b36b76904/volumes" Jan 
27 13:50:00 crc kubenswrapper[4914]: I0127 13:50:00.320873 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" event={"ID":"542b62a8-0b67-4aeb-a2ec-c3d88bed3443","Type":"ContainerStarted","Data":"9e4c4102776cf4ccc505e6b6c402967a0f63d5948ff4e850bdd6875e0728f2b9"} Jan 27 13:50:00 crc kubenswrapper[4914]: I0127 13:50:00.320979 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" event={"ID":"542b62a8-0b67-4aeb-a2ec-c3d88bed3443","Type":"ContainerStarted","Data":"cbd162fdce2f63c8cfaf044d72017ea92398384aab01308625dc548b279917b2"} Jan 27 13:50:00 crc kubenswrapper[4914]: I0127 13:50:00.321284 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:50:00 crc kubenswrapper[4914]: I0127 13:50:00.322580 4914 patch_prober.go:28] interesting pod/controller-manager-5bb65c79b9-pnd8q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Jan 27 13:50:00 crc kubenswrapper[4914]: I0127 13:50:00.322637 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" podUID="542b62a8-0b67-4aeb-a2ec-c3d88bed3443" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Jan 27 13:50:00 crc kubenswrapper[4914]: I0127 13:50:00.323971 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" event={"ID":"934f8714-e8a2-487e-af93-4f52c49c8924","Type":"ContainerStarted","Data":"623db9a40dd0941dbe7e1de7a88f9bc4412e506a01a910d58ddb3d6801622038"} Jan 
27 13:50:00 crc kubenswrapper[4914]: I0127 13:50:00.324009 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" event={"ID":"934f8714-e8a2-487e-af93-4f52c49c8924","Type":"ContainerStarted","Data":"07a8719e7b946eb2ac14806944594a4decfbdc0ef292a74da38cab25d09bd1bd"} Jan 27 13:50:00 crc kubenswrapper[4914]: I0127 13:50:00.324213 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:50:00 crc kubenswrapper[4914]: I0127 13:50:00.325257 4914 patch_prober.go:28] interesting pod/route-controller-manager-78cd5f8d7d-jshn5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Jan 27 13:50:00 crc kubenswrapper[4914]: I0127 13:50:00.325299 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" podUID="934f8714-e8a2-487e-af93-4f52c49c8924" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Jan 27 13:50:00 crc kubenswrapper[4914]: I0127 13:50:00.343434 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" podStartSLOduration=3.343415963 podStartE2EDuration="3.343415963s" podCreationTimestamp="2026-01-27 13:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:50:00.339736634 +0000 UTC m=+358.652086719" watchObservedRunningTime="2026-01-27 13:50:00.343415963 +0000 UTC m=+358.655766048" Jan 27 13:50:01 crc kubenswrapper[4914]: 
I0127 13:50:01.332114 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" Jan 27 13:50:01 crc kubenswrapper[4914]: I0127 13:50:01.332754 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bb65c79b9-pnd8q" Jan 27 13:50:01 crc kubenswrapper[4914]: I0127 13:50:01.350219 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-78cd5f8d7d-jshn5" podStartSLOduration=4.350202434 podStartE2EDuration="4.350202434s" podCreationTimestamp="2026-01-27 13:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:50:00.360132358 +0000 UTC m=+358.672482443" watchObservedRunningTime="2026-01-27 13:50:01.350202434 +0000 UTC m=+359.662552519" Jan 27 13:50:01 crc kubenswrapper[4914]: I0127 13:50:01.444550 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.277806 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.278149 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.356614 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.356670 4914 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a2f90e9e8f1a2ab51054e59eff137619b40ac960a84688741267390b724b051b" exitCode=137 Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.356716 4914 scope.go:117] "RemoveContainer" containerID="a2f90e9e8f1a2ab51054e59eff137619b40ac960a84688741267390b724b051b" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.356759 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.372583 4914 scope.go:117] "RemoveContainer" containerID="a2f90e9e8f1a2ab51054e59eff137619b40ac960a84688741267390b724b051b" Jan 27 13:50:03 crc kubenswrapper[4914]: E0127 13:50:03.373074 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f90e9e8f1a2ab51054e59eff137619b40ac960a84688741267390b724b051b\": container with ID starting with a2f90e9e8f1a2ab51054e59eff137619b40ac960a84688741267390b724b051b not found: ID does not exist" containerID="a2f90e9e8f1a2ab51054e59eff137619b40ac960a84688741267390b724b051b" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.373116 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f90e9e8f1a2ab51054e59eff137619b40ac960a84688741267390b724b051b"} err="failed to get container status \"a2f90e9e8f1a2ab51054e59eff137619b40ac960a84688741267390b724b051b\": rpc error: code = NotFound desc = could 
not find container \"a2f90e9e8f1a2ab51054e59eff137619b40ac960a84688741267390b724b051b\": container with ID starting with a2f90e9e8f1a2ab51054e59eff137619b40ac960a84688741267390b724b051b not found: ID does not exist" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.431131 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.431173 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.431199 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.431235 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.431257 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.431268 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.431328 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.431405 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.431440 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.431702 4914 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.431714 4914 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.431724 4914 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.431733 4914 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.451242 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:50:03 crc kubenswrapper[4914]: I0127 13:50:03.533269 4914 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:50:04 crc kubenswrapper[4914]: I0127 13:50:04.300765 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 13:50:04 crc kubenswrapper[4914]: I0127 13:50:04.301342 4914 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 27 13:50:04 crc kubenswrapper[4914]: I0127 13:50:04.309918 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 13:50:04 crc kubenswrapper[4914]: I0127 13:50:04.309965 4914 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1bba3165-cb67-477e-a6a5-c67bd7d0298b" Jan 27 13:50:04 crc kubenswrapper[4914]: I0127 13:50:04.313023 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 13:50:04 crc kubenswrapper[4914]: I0127 13:50:04.313073 4914 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1bba3165-cb67-477e-a6a5-c67bd7d0298b" Jan 27 13:50:07 crc kubenswrapper[4914]: I0127 13:50:07.691333 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 
13:50:07 crc kubenswrapper[4914]: I0127 13:50:07.691670 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.117081 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8q7ck"] Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.118379 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8q7ck" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" containerName="registry-server" containerID="cri-o://9410434da2d572ca598a13ba1d0a5bc1bf4b83fc8e566efd1ec6e46441f2282d" gracePeriod=2 Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.452901 4914 generic.go:334] "Generic (PLEG): container finished" podID="1f2a8a9b-5334-4de2-9198-7677a52f8002" containerID="9410434da2d572ca598a13ba1d0a5bc1bf4b83fc8e566efd1ec6e46441f2282d" exitCode=0 Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.452967 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8q7ck" event={"ID":"1f2a8a9b-5334-4de2-9198-7677a52f8002","Type":"ContainerDied","Data":"9410434da2d572ca598a13ba1d0a5bc1bf4b83fc8e566efd1ec6e46441f2282d"} Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.838690 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8q7ck"
Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.871579 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qbl7\" (UniqueName: \"kubernetes.io/projected/1f2a8a9b-5334-4de2-9198-7677a52f8002-kube-api-access-7qbl7\") pod \"1f2a8a9b-5334-4de2-9198-7677a52f8002\" (UID: \"1f2a8a9b-5334-4de2-9198-7677a52f8002\") "
Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.871744 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2a8a9b-5334-4de2-9198-7677a52f8002-catalog-content\") pod \"1f2a8a9b-5334-4de2-9198-7677a52f8002\" (UID: \"1f2a8a9b-5334-4de2-9198-7677a52f8002\") "
Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.871794 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2a8a9b-5334-4de2-9198-7677a52f8002-utilities\") pod \"1f2a8a9b-5334-4de2-9198-7677a52f8002\" (UID: \"1f2a8a9b-5334-4de2-9198-7677a52f8002\") "
Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.873969 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2a8a9b-5334-4de2-9198-7677a52f8002-utilities" (OuterVolumeSpecName: "utilities") pod "1f2a8a9b-5334-4de2-9198-7677a52f8002" (UID: "1f2a8a9b-5334-4de2-9198-7677a52f8002"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.879668 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2a8a9b-5334-4de2-9198-7677a52f8002-kube-api-access-7qbl7" (OuterVolumeSpecName: "kube-api-access-7qbl7") pod "1f2a8a9b-5334-4de2-9198-7677a52f8002" (UID: "1f2a8a9b-5334-4de2-9198-7677a52f8002"). InnerVolumeSpecName "kube-api-access-7qbl7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.930720 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2a8a9b-5334-4de2-9198-7677a52f8002-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f2a8a9b-5334-4de2-9198-7677a52f8002" (UID: "1f2a8a9b-5334-4de2-9198-7677a52f8002"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.973929 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2a8a9b-5334-4de2-9198-7677a52f8002-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.973962 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2a8a9b-5334-4de2-9198-7677a52f8002-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:20 crc kubenswrapper[4914]: I0127 13:50:20.973972 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qbl7\" (UniqueName: \"kubernetes.io/projected/1f2a8a9b-5334-4de2-9198-7677a52f8002-kube-api-access-7qbl7\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:21 crc kubenswrapper[4914]: I0127 13:50:21.459490 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8q7ck" event={"ID":"1f2a8a9b-5334-4de2-9198-7677a52f8002","Type":"ContainerDied","Data":"1a6f82b4a8ae6d0fa3568f02c0861f4cd98891d44a15681759989e4b4661a1d7"}
Jan 27 13:50:21 crc kubenswrapper[4914]: I0127 13:50:21.459547 4914 scope.go:117] "RemoveContainer" containerID="9410434da2d572ca598a13ba1d0a5bc1bf4b83fc8e566efd1ec6e46441f2282d"
Jan 27 13:50:21 crc kubenswrapper[4914]: I0127 13:50:21.460915 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8q7ck"
Jan 27 13:50:21 crc kubenswrapper[4914]: I0127 13:50:21.472904 4914 scope.go:117] "RemoveContainer" containerID="47d49d21deff3161d9f89f43545414051cf46cd675ae97340c3acb007a93b494"
Jan 27 13:50:21 crc kubenswrapper[4914]: I0127 13:50:21.492536 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8q7ck"]
Jan 27 13:50:21 crc kubenswrapper[4914]: I0127 13:50:21.494289 4914 scope.go:117] "RemoveContainer" containerID="34239dcd89cfaf4e1cf48148113709125ae8f51ed16f7bc222bfb4197ed96dc7"
Jan 27 13:50:21 crc kubenswrapper[4914]: I0127 13:50:21.497692 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8q7ck"]
Jan 27 13:50:22 crc kubenswrapper[4914]: I0127 13:50:22.300766 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" path="/var/lib/kubelet/pods/1f2a8a9b-5334-4de2-9198-7677a52f8002/volumes"
Jan 27 13:50:22 crc kubenswrapper[4914]: I0127 13:50:22.641817 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-62njv"]
Jan 27 13:50:37 crc kubenswrapper[4914]: I0127 13:50:37.731577 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 13:50:37 crc kubenswrapper[4914]: I0127 13:50:37.732188 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 13:50:47 crc kubenswrapper[4914]: I0127 13:50:47.669892 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-62njv" podUID="109be131-7cbd-4205-b5c7-eaf7790737f4" containerName="oauth-openshift" containerID="cri-o://ba212f6328b03ccab122f1d64ec8b61dce5bb36cab230ffc84c14459559b3b18" gracePeriod=15
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.106236 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-62njv"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.146213 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-697f994c97-lfq74"]
Jan 27 13:50:48 crc kubenswrapper[4914]: E0127 13:50:48.146464 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" containerName="registry-server"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.146478 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" containerName="registry-server"
Jan 27 13:50:48 crc kubenswrapper[4914]: E0127 13:50:48.146500 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" containerName="extract-utilities"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.146512 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" containerName="extract-utilities"
Jan 27 13:50:48 crc kubenswrapper[4914]: E0127 13:50:48.146523 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" containerName="extract-content"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.146530 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" containerName="extract-content"
Jan 27 13:50:48 crc kubenswrapper[4914]: E0127 13:50:48.146541 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109be131-7cbd-4205-b5c7-eaf7790737f4" containerName="oauth-openshift"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.146547 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="109be131-7cbd-4205-b5c7-eaf7790737f4" containerName="oauth-openshift"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.146653 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="109be131-7cbd-4205-b5c7-eaf7790737f4" containerName="oauth-openshift"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.146668 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2a8a9b-5334-4de2-9198-7677a52f8002" containerName="registry-server"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.147077 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.161022 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-697f994c97-lfq74"]
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.268523 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-session\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.268565 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-cliconfig\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.268625 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-provider-selection\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.268645 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6btz\" (UniqueName: \"kubernetes.io/projected/109be131-7cbd-4205-b5c7-eaf7790737f4-kube-api-access-t6btz\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.268695 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-ocp-branding-template\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269206 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-service-ca\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269235 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-error\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269255 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-trusted-ca-bundle\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269272 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-router-certs\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269294 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-idp-0-file-data\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269313 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-serving-cert\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269334 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-audit-policies\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269351 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-login\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269385 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/109be131-7cbd-4205-b5c7-eaf7790737f4-audit-dir\") pod \"109be131-7cbd-4205-b5c7-eaf7790737f4\" (UID: \"109be131-7cbd-4205-b5c7-eaf7790737f4\") "
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269493 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-session\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269516 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269541 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwh7w\" (UniqueName: \"kubernetes.io/projected/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-kube-api-access-xwh7w\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269556 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269574 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269592 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-user-template-error\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269610 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-router-certs\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269627 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269655 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-audit-dir\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269673 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269691 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-audit-policies\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269711 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-service-ca\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269746 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269763 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-user-template-login\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.269572 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.270451 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/109be131-7cbd-4205-b5c7-eaf7790737f4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.270502 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.270648 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.270738 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.278224 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.278560 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.278939 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.279172 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.279320 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.279700 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.282151 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/109be131-7cbd-4205-b5c7-eaf7790737f4-kube-api-access-t6btz" (OuterVolumeSpecName: "kube-api-access-t6btz") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "kube-api-access-t6btz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.284273 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.284293 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "109be131-7cbd-4205-b5c7-eaf7790737f4" (UID: "109be131-7cbd-4205-b5c7-eaf7790737f4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.370996 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371071 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-audit-dir\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371096 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371117 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-audit-policies\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371149 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-service-ca\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371195 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371219 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-user-template-login\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371260 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-session\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371291 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371326 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwh7w\" (UniqueName: \"kubernetes.io/projected/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-kube-api-access-xwh7w\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371349 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371387 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371412 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-user-template-error\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371439 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-router-certs\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371491 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371517 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6btz\" (UniqueName: \"kubernetes.io/projected/109be131-7cbd-4205-b5c7-eaf7790737f4-kube-api-access-t6btz\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371531 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371545 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371559 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371209 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-audit-dir\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.371696 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.372089 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.372249 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-service-ca\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.372487 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-audit-policies\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.372589 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.372618 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.372637 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.372654 4914 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.372670 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.372689 4914 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/109be131-7cbd-4205-b5c7-eaf7790737f4-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.372708 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.372725 4914 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/109be131-7cbd-4205-b5c7-eaf7790737f4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.372957 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.374546 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-router-certs\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.374638 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-user-template-login\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.375027 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74"
Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.375100 4914 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74" Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.375244 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-user-template-error\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74" Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.375503 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74" Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.375863 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-system-session\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74" Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.376190 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: 
\"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74" Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.386552 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwh7w\" (UniqueName: \"kubernetes.io/projected/5ff037c0-1fca-4796-bf45-7cfb4fff17f1-kube-api-access-xwh7w\") pod \"oauth-openshift-697f994c97-lfq74\" (UID: \"5ff037c0-1fca-4796-bf45-7cfb4fff17f1\") " pod="openshift-authentication/oauth-openshift-697f994c97-lfq74" Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.465557 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-697f994c97-lfq74" Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.648637 4914 generic.go:334] "Generic (PLEG): container finished" podID="109be131-7cbd-4205-b5c7-eaf7790737f4" containerID="ba212f6328b03ccab122f1d64ec8b61dce5bb36cab230ffc84c14459559b3b18" exitCode=0 Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.648697 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-62njv" event={"ID":"109be131-7cbd-4205-b5c7-eaf7790737f4","Type":"ContainerDied","Data":"ba212f6328b03ccab122f1d64ec8b61dce5bb36cab230ffc84c14459559b3b18"} Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.648724 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-62njv" event={"ID":"109be131-7cbd-4205-b5c7-eaf7790737f4","Type":"ContainerDied","Data":"698b81a8e6bc81a0eec15627733b169e6a9b73343c0fa17729ff40e85dd3d605"} Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.648719 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-62njv" Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.648738 4914 scope.go:117] "RemoveContainer" containerID="ba212f6328b03ccab122f1d64ec8b61dce5bb36cab230ffc84c14459559b3b18" Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.667229 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-62njv"] Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.670910 4914 scope.go:117] "RemoveContainer" containerID="ba212f6328b03ccab122f1d64ec8b61dce5bb36cab230ffc84c14459559b3b18" Jan 27 13:50:48 crc kubenswrapper[4914]: E0127 13:50:48.671407 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba212f6328b03ccab122f1d64ec8b61dce5bb36cab230ffc84c14459559b3b18\": container with ID starting with ba212f6328b03ccab122f1d64ec8b61dce5bb36cab230ffc84c14459559b3b18 not found: ID does not exist" containerID="ba212f6328b03ccab122f1d64ec8b61dce5bb36cab230ffc84c14459559b3b18" Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.671454 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba212f6328b03ccab122f1d64ec8b61dce5bb36cab230ffc84c14459559b3b18"} err="failed to get container status \"ba212f6328b03ccab122f1d64ec8b61dce5bb36cab230ffc84c14459559b3b18\": rpc error: code = NotFound desc = could not find container \"ba212f6328b03ccab122f1d64ec8b61dce5bb36cab230ffc84c14459559b3b18\": container with ID starting with ba212f6328b03ccab122f1d64ec8b61dce5bb36cab230ffc84c14459559b3b18 not found: ID does not exist" Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.671782 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-62njv"] Jan 27 13:50:48 crc kubenswrapper[4914]: I0127 13:50:48.857316 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-697f994c97-lfq74"] Jan 27 13:50:49 crc kubenswrapper[4914]: I0127 13:50:49.655862 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-697f994c97-lfq74" event={"ID":"5ff037c0-1fca-4796-bf45-7cfb4fff17f1","Type":"ContainerStarted","Data":"06f4301ca2420a5d96a83ea7f963d0c894fa67a787ef87957bee606dbeb8913b"} Jan 27 13:50:49 crc kubenswrapper[4914]: I0127 13:50:49.656263 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-697f994c97-lfq74" event={"ID":"5ff037c0-1fca-4796-bf45-7cfb4fff17f1","Type":"ContainerStarted","Data":"abecf7b03948376136e91b016fdac1d476fc2f7a80ab7cd85c83818c027772fa"} Jan 27 13:50:49 crc kubenswrapper[4914]: I0127 13:50:49.656290 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-697f994c97-lfq74" Jan 27 13:50:49 crc kubenswrapper[4914]: I0127 13:50:49.662071 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-697f994c97-lfq74" Jan 27 13:50:49 crc kubenswrapper[4914]: I0127 13:50:49.683672 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-697f994c97-lfq74" podStartSLOduration=27.683656215 podStartE2EDuration="27.683656215s" podCreationTimestamp="2026-01-27 13:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:50:49.680979361 +0000 UTC m=+407.993329486" watchObservedRunningTime="2026-01-27 13:50:49.683656215 +0000 UTC m=+407.996006300" Jan 27 13:50:50 crc kubenswrapper[4914]: I0127 13:50:50.305495 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="109be131-7cbd-4205-b5c7-eaf7790737f4" path="/var/lib/kubelet/pods/109be131-7cbd-4205-b5c7-eaf7790737f4/volumes" Jan 27 13:51:07 crc 
kubenswrapper[4914]: I0127 13:51:07.690771 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:51:07 crc kubenswrapper[4914]: I0127 13:51:07.691306 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:51:07 crc kubenswrapper[4914]: I0127 13:51:07.691346 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 13:51:07 crc kubenswrapper[4914]: I0127 13:51:07.692345 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95915b10dd9749a36c926b1d56b1160495b6cbef34f668e33fde194019445d27"} pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:51:07 crc kubenswrapper[4914]: I0127 13:51:07.692396 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" containerID="cri-o://95915b10dd9749a36c926b1d56b1160495b6cbef34f668e33fde194019445d27" gracePeriod=600 Jan 27 13:51:08 crc kubenswrapper[4914]: I0127 13:51:08.753552 4914 generic.go:334] "Generic (PLEG): container finished" podID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" 
containerID="95915b10dd9749a36c926b1d56b1160495b6cbef34f668e33fde194019445d27" exitCode=0 Jan 27 13:51:08 crc kubenswrapper[4914]: I0127 13:51:08.753637 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerDied","Data":"95915b10dd9749a36c926b1d56b1160495b6cbef34f668e33fde194019445d27"} Jan 27 13:51:08 crc kubenswrapper[4914]: I0127 13:51:08.754086 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"4133f9318dfd6995faf0a783971bc157faacba434797a9f650945b1d398d1c43"} Jan 27 13:51:08 crc kubenswrapper[4914]: I0127 13:51:08.754104 4914 scope.go:117] "RemoveContainer" containerID="8369227e8a20232cee86dd52751217dd792c3e253ac52c3ffc2588b4c3a11dbf" Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.266204 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pkm2z"] Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.267100 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pkm2z" podUID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" containerName="registry-server" containerID="cri-o://9fa6691ec60a1b026f9c868932ff96f82aed67fa8963b26e379ba43a4e13ebb2" gracePeriod=30 Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.268114 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lgzjm"] Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.268301 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lgzjm" podUID="02984395-bee4-40bd-98ab-2bf03009bb9f" containerName="registry-server" 
containerID="cri-o://0ba317a053426c16839cb80378793b21d7e2bf157b3a26eefa3216d7960fdbd5" gracePeriod=30 Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.273742 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w75kk"] Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.273983 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" podUID="7656576b-aeae-4b15-b2ab-18658770a1e5" containerName="marketplace-operator" containerID="cri-o://3f2da090eeac987e00b02a97f9eeb0671e65f712b676b2730f227452feeda9a4" gracePeriod=30 Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.289425 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfpmr"] Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.289718 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nfpmr" podUID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" containerName="registry-server" containerID="cri-o://7a591c9c0f61df1372063c828e5123d253b4baa01e78bd844bedd97e5156edf4" gracePeriod=30 Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.305608 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zg4gg"] Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.305905 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zg4gg" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" containerName="registry-server" containerID="cri-o://09099fc04acf5029a51806c057cdf4a292b4c43f0426790935afde6c3781162b" gracePeriod=30 Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.315925 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bdpcf"] Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.316683 
4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.319854 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bdpcf"] Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.370496 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ea81bc4-78b8-4b11-a245-95037884bbde-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bdpcf\" (UID: \"7ea81bc4-78b8-4b11-a245-95037884bbde\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.370813 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7ea81bc4-78b8-4b11-a245-95037884bbde-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bdpcf\" (UID: \"7ea81bc4-78b8-4b11-a245-95037884bbde\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.371020 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phq2\" (UniqueName: \"kubernetes.io/projected/7ea81bc4-78b8-4b11-a245-95037884bbde-kube-api-access-8phq2\") pod \"marketplace-operator-79b997595-bdpcf\" (UID: \"7ea81bc4-78b8-4b11-a245-95037884bbde\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.472222 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7ea81bc4-78b8-4b11-a245-95037884bbde-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-bdpcf\" (UID: \"7ea81bc4-78b8-4b11-a245-95037884bbde\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.472532 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8phq2\" (UniqueName: \"kubernetes.io/projected/7ea81bc4-78b8-4b11-a245-95037884bbde-kube-api-access-8phq2\") pod \"marketplace-operator-79b997595-bdpcf\" (UID: \"7ea81bc4-78b8-4b11-a245-95037884bbde\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.472656 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ea81bc4-78b8-4b11-a245-95037884bbde-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bdpcf\" (UID: \"7ea81bc4-78b8-4b11-a245-95037884bbde\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.474098 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ea81bc4-78b8-4b11-a245-95037884bbde-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bdpcf\" (UID: \"7ea81bc4-78b8-4b11-a245-95037884bbde\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.478229 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7ea81bc4-78b8-4b11-a245-95037884bbde-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bdpcf\" (UID: \"7ea81bc4-78b8-4b11-a245-95037884bbde\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.491018 4914 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8phq2\" (UniqueName: \"kubernetes.io/projected/7ea81bc4-78b8-4b11-a245-95037884bbde-kube-api-access-8phq2\") pod \"marketplace-operator-79b997595-bdpcf\" (UID: \"7ea81bc4-78b8-4b11-a245-95037884bbde\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.638194 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.783402 4914 generic.go:334] "Generic (PLEG): container finished" podID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" containerID="7a591c9c0f61df1372063c828e5123d253b4baa01e78bd844bedd97e5156edf4" exitCode=0 Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.783497 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfpmr" event={"ID":"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab","Type":"ContainerDied","Data":"7a591c9c0f61df1372063c828e5123d253b4baa01e78bd844bedd97e5156edf4"} Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.786756 4914 generic.go:334] "Generic (PLEG): container finished" podID="7656576b-aeae-4b15-b2ab-18658770a1e5" containerID="3f2da090eeac987e00b02a97f9eeb0671e65f712b676b2730f227452feeda9a4" exitCode=0 Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.786844 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" event={"ID":"7656576b-aeae-4b15-b2ab-18658770a1e5","Type":"ContainerDied","Data":"3f2da090eeac987e00b02a97f9eeb0671e65f712b676b2730f227452feeda9a4"} Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.786899 4914 scope.go:117] "RemoveContainer" containerID="1ae8590c8f2e12da5c32dca51ec53fe0bb5f6771669d305ca3d17672f3571794" Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.788779 4914 generic.go:334] "Generic (PLEG): container finished" 
podID="02984395-bee4-40bd-98ab-2bf03009bb9f" containerID="0ba317a053426c16839cb80378793b21d7e2bf157b3a26eefa3216d7960fdbd5" exitCode=0 Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.788931 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgzjm" event={"ID":"02984395-bee4-40bd-98ab-2bf03009bb9f","Type":"ContainerDied","Data":"0ba317a053426c16839cb80378793b21d7e2bf157b3a26eefa3216d7960fdbd5"} Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.792151 4914 generic.go:334] "Generic (PLEG): container finished" podID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" containerID="09099fc04acf5029a51806c057cdf4a292b4c43f0426790935afde6c3781162b" exitCode=0 Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.792202 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4gg" event={"ID":"d6cc3d29-abfa-4a3e-8251-5811e2bab91e","Type":"ContainerDied","Data":"09099fc04acf5029a51806c057cdf4a292b4c43f0426790935afde6c3781162b"} Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.794623 4914 generic.go:334] "Generic (PLEG): container finished" podID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" containerID="9fa6691ec60a1b026f9c868932ff96f82aed67fa8963b26e379ba43a4e13ebb2" exitCode=0 Jan 27 13:51:12 crc kubenswrapper[4914]: I0127 13:51:12.794672 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pkm2z" event={"ID":"dc6a9b51-d0a6-4370-94bd-342dcfa54a99","Type":"ContainerDied","Data":"9fa6691ec60a1b026f9c868932ff96f82aed67fa8963b26e379ba43a4e13ebb2"} Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.028108 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bdpcf"] Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.171255 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pkm2z" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.282502 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-catalog-content\") pod \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\" (UID: \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.282543 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6frj\" (UniqueName: \"kubernetes.io/projected/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-kube-api-access-m6frj\") pod \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\" (UID: \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.282649 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-utilities\") pod \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\" (UID: \"dc6a9b51-d0a6-4370-94bd-342dcfa54a99\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.283464 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-utilities" (OuterVolumeSpecName: "utilities") pod "dc6a9b51-d0a6-4370-94bd-342dcfa54a99" (UID: "dc6a9b51-d0a6-4370-94bd-342dcfa54a99"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.309066 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.309233 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-kube-api-access-m6frj" (OuterVolumeSpecName: "kube-api-access-m6frj") pod "dc6a9b51-d0a6-4370-94bd-342dcfa54a99" (UID: "dc6a9b51-d0a6-4370-94bd-342dcfa54a99"). InnerVolumeSpecName "kube-api-access-m6frj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.322756 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfpmr" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.330423 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgzjm" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.384107 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02984395-bee4-40bd-98ab-2bf03009bb9f-catalog-content\") pod \"02984395-bee4-40bd-98ab-2bf03009bb9f\" (UID: \"02984395-bee4-40bd-98ab-2bf03009bb9f\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.384162 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02984395-bee4-40bd-98ab-2bf03009bb9f-utilities\") pod \"02984395-bee4-40bd-98ab-2bf03009bb9f\" (UID: \"02984395-bee4-40bd-98ab-2bf03009bb9f\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.384194 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g25c2\" (UniqueName: \"kubernetes.io/projected/02984395-bee4-40bd-98ab-2bf03009bb9f-kube-api-access-g25c2\") pod \"02984395-bee4-40bd-98ab-2bf03009bb9f\" (UID: 
\"02984395-bee4-40bd-98ab-2bf03009bb9f\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.384264 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-utilities\") pod \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\" (UID: \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.384293 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxgxx\" (UniqueName: \"kubernetes.io/projected/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-kube-api-access-dxgxx\") pod \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\" (UID: \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.384333 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfl9x\" (UniqueName: \"kubernetes.io/projected/7656576b-aeae-4b15-b2ab-18658770a1e5-kube-api-access-dfl9x\") pod \"7656576b-aeae-4b15-b2ab-18658770a1e5\" (UID: \"7656576b-aeae-4b15-b2ab-18658770a1e5\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.384373 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7656576b-aeae-4b15-b2ab-18658770a1e5-marketplace-operator-metrics\") pod \"7656576b-aeae-4b15-b2ab-18658770a1e5\" (UID: \"7656576b-aeae-4b15-b2ab-18658770a1e5\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.384401 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7656576b-aeae-4b15-b2ab-18658770a1e5-marketplace-trusted-ca\") pod \"7656576b-aeae-4b15-b2ab-18658770a1e5\" (UID: \"7656576b-aeae-4b15-b2ab-18658770a1e5\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.384430 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-catalog-content\") pod \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\" (UID: \"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.384657 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6frj\" (UniqueName: \"kubernetes.io/projected/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-kube-api-access-m6frj\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.384687 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.388580 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-utilities" (OuterVolumeSpecName: "utilities") pod "a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" (UID: "a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.389479 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7656576b-aeae-4b15-b2ab-18658770a1e5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7656576b-aeae-4b15-b2ab-18658770a1e5" (UID: "7656576b-aeae-4b15-b2ab-18658770a1e5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.391675 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02984395-bee4-40bd-98ab-2bf03009bb9f-utilities" (OuterVolumeSpecName: "utilities") pod "02984395-bee4-40bd-98ab-2bf03009bb9f" (UID: "02984395-bee4-40bd-98ab-2bf03009bb9f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.393023 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02984395-bee4-40bd-98ab-2bf03009bb9f-kube-api-access-g25c2" (OuterVolumeSpecName: "kube-api-access-g25c2") pod "02984395-bee4-40bd-98ab-2bf03009bb9f" (UID: "02984395-bee4-40bd-98ab-2bf03009bb9f"). InnerVolumeSpecName "kube-api-access-g25c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.393058 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-kube-api-access-dxgxx" (OuterVolumeSpecName: "kube-api-access-dxgxx") pod "a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" (UID: "a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab"). InnerVolumeSpecName "kube-api-access-dxgxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.393394 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7656576b-aeae-4b15-b2ab-18658770a1e5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7656576b-aeae-4b15-b2ab-18658770a1e5" (UID: "7656576b-aeae-4b15-b2ab-18658770a1e5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.393471 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7656576b-aeae-4b15-b2ab-18658770a1e5-kube-api-access-dfl9x" (OuterVolumeSpecName: "kube-api-access-dfl9x") pod "7656576b-aeae-4b15-b2ab-18658770a1e5" (UID: "7656576b-aeae-4b15-b2ab-18658770a1e5"). InnerVolumeSpecName "kube-api-access-dfl9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.395356 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc6a9b51-d0a6-4370-94bd-342dcfa54a99" (UID: "dc6a9b51-d0a6-4370-94bd-342dcfa54a99"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.423984 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" (UID: "a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.442909 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.451049 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02984395-bee4-40bd-98ab-2bf03009bb9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02984395-bee4-40bd-98ab-2bf03009bb9f" (UID: "02984395-bee4-40bd-98ab-2bf03009bb9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.485280 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-catalog-content\") pod \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\" (UID: \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.485437 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgvgs\" (UniqueName: \"kubernetes.io/projected/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-kube-api-access-mgvgs\") pod \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\" (UID: \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.485480 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-utilities\") pod \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\" (UID: \"d6cc3d29-abfa-4a3e-8251-5811e2bab91e\") " Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.485684 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxgxx\" (UniqueName: \"kubernetes.io/projected/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-kube-api-access-dxgxx\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.485702 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfl9x\" (UniqueName: \"kubernetes.io/projected/7656576b-aeae-4b15-b2ab-18658770a1e5-kube-api-access-dfl9x\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.485714 4914 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7656576b-aeae-4b15-b2ab-18658770a1e5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 
crc kubenswrapper[4914]: I0127 13:51:13.485728 4914 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7656576b-aeae-4b15-b2ab-18658770a1e5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.485768 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.485781 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc6a9b51-d0a6-4370-94bd-342dcfa54a99-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.485793 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02984395-bee4-40bd-98ab-2bf03009bb9f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.485805 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02984395-bee4-40bd-98ab-2bf03009bb9f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.485815 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g25c2\" (UniqueName: \"kubernetes.io/projected/02984395-bee4-40bd-98ab-2bf03009bb9f-kube-api-access-g25c2\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.485840 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.486501 4914 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-utilities" (OuterVolumeSpecName: "utilities") pod "d6cc3d29-abfa-4a3e-8251-5811e2bab91e" (UID: "d6cc3d29-abfa-4a3e-8251-5811e2bab91e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.489022 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-kube-api-access-mgvgs" (OuterVolumeSpecName: "kube-api-access-mgvgs") pod "d6cc3d29-abfa-4a3e-8251-5811e2bab91e" (UID: "d6cc3d29-abfa-4a3e-8251-5811e2bab91e"). InnerVolumeSpecName "kube-api-access-mgvgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.587429 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.587787 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgvgs\" (UniqueName: \"kubernetes.io/projected/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-kube-api-access-mgvgs\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.623030 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6cc3d29-abfa-4a3e-8251-5811e2bab91e" (UID: "d6cc3d29-abfa-4a3e-8251-5811e2bab91e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.689265 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6cc3d29-abfa-4a3e-8251-5811e2bab91e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.803035 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfpmr" event={"ID":"a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab","Type":"ContainerDied","Data":"e9750e9124154fcd0a987dbe655aeec48f6bb474c9dc4797ac61f938943ecb2a"} Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.803103 4914 scope.go:117] "RemoveContainer" containerID="7a591c9c0f61df1372063c828e5123d253b4baa01e78bd844bedd97e5156edf4" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.803058 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfpmr" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.805245 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" event={"ID":"7656576b-aeae-4b15-b2ab-18658770a1e5","Type":"ContainerDied","Data":"f00c094b2e022af70c559db27cdfbc4a8573e2bbbe1d2dfaaaae45ec9835e66b"} Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.805273 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w75kk" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.806683 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" event={"ID":"7ea81bc4-78b8-4b11-a245-95037884bbde","Type":"ContainerStarted","Data":"ba991d27e8a2a1249337272d868965061a1f1de02122fe792b2def9396c9719f"} Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.806708 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" event={"ID":"7ea81bc4-78b8-4b11-a245-95037884bbde","Type":"ContainerStarted","Data":"b316943b4c1830a9de20fc6034b5f3a8490827273da4bdf7231c17d8e99d5b4c"} Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.807135 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.808283 4914 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bdpcf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" start-of-body= Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.808318 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" podUID="7ea81bc4-78b8-4b11-a245-95037884bbde" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.809458 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgzjm" 
event={"ID":"02984395-bee4-40bd-98ab-2bf03009bb9f","Type":"ContainerDied","Data":"890770ee8db7e33ba7dfa78e1c2d1de3690d432d8bd59aa6c8150a8c6433f400"} Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.809560 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgzjm" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.817900 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pkm2z" event={"ID":"dc6a9b51-d0a6-4370-94bd-342dcfa54a99","Type":"ContainerDied","Data":"a2badf41b5f42b65537d0a64298f09822fe0b006f6f9c90c91192a1563880658"} Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.817936 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pkm2z" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.820698 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4gg" event={"ID":"d6cc3d29-abfa-4a3e-8251-5811e2bab91e","Type":"ContainerDied","Data":"2a15ecc62b88f242b2852343c03490b5e1b22da34f61531b9df27c5ddb3725e1"} Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.820786 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zg4gg" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.825154 4914 scope.go:117] "RemoveContainer" containerID="8d0ad1a1be03af1c2d14eb84bfd0f7e7de3b496018888e7702b9a1b583441f3e" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.846276 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" podStartSLOduration=1.846258679 podStartE2EDuration="1.846258679s" podCreationTimestamp="2026-01-27 13:51:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:51:13.843056002 +0000 UTC m=+432.155406087" watchObservedRunningTime="2026-01-27 13:51:13.846258679 +0000 UTC m=+432.158608764" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.858139 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w75kk"] Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.862711 4914 scope.go:117] "RemoveContainer" containerID="ca3ad87478d297fa47635788c287edca8840f9dfc050395c2f79e34fc72cba82" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.865468 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w75kk"] Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.869507 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lgzjm"] Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.873761 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lgzjm"] Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.880614 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pkm2z"] Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.883813 4914 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pkm2z"] Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.895797 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfpmr"] Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.902976 4914 scope.go:117] "RemoveContainer" containerID="3f2da090eeac987e00b02a97f9eeb0671e65f712b676b2730f227452feeda9a4" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.908400 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfpmr"] Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.913788 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zg4gg"] Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.919679 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zg4gg"] Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.919745 4914 scope.go:117] "RemoveContainer" containerID="0ba317a053426c16839cb80378793b21d7e2bf157b3a26eefa3216d7960fdbd5" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.939617 4914 scope.go:117] "RemoveContainer" containerID="b7fe8e702dbb5ec62c312b5eb8254b98bc91652774696ab97d0573b0392d1faf" Jan 27 13:51:13 crc kubenswrapper[4914]: I0127 13:51:13.989116 4914 scope.go:117] "RemoveContainer" containerID="b3528b0aa71be1bd716f42ca84a51d4e70cf49bbcd0f06da9e797088658d81b3" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.004348 4914 scope.go:117] "RemoveContainer" containerID="9fa6691ec60a1b026f9c868932ff96f82aed67fa8963b26e379ba43a4e13ebb2" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.022564 4914 scope.go:117] "RemoveContainer" containerID="330973c79fd15b9aff414e161e42a0a5d59b3177cc62bb3cc6e2571c637e2a76" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.037044 4914 scope.go:117] "RemoveContainer" 
containerID="953a306865315e87452db34b5179322bec808bb6108b207635b49dc7e4dfcf00" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.051665 4914 scope.go:117] "RemoveContainer" containerID="09099fc04acf5029a51806c057cdf4a292b4c43f0426790935afde6c3781162b" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.066052 4914 scope.go:117] "RemoveContainer" containerID="6ff07372258f4881de67c680c619422a7f5fe615f4a71e1d8806d160aa70ee6b" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.083362 4914 scope.go:117] "RemoveContainer" containerID="92a5484454dbd611740ebe8c370fe95abe7dad83b8c5069c4048147c86990843" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.301314 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02984395-bee4-40bd-98ab-2bf03009bb9f" path="/var/lib/kubelet/pods/02984395-bee4-40bd-98ab-2bf03009bb9f/volumes" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.302114 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7656576b-aeae-4b15-b2ab-18658770a1e5" path="/var/lib/kubelet/pods/7656576b-aeae-4b15-b2ab-18658770a1e5/volumes" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.302635 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" path="/var/lib/kubelet/pods/a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab/volumes" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.303642 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" path="/var/lib/kubelet/pods/d6cc3d29-abfa-4a3e-8251-5811e2bab91e/volumes" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.304253 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" path="/var/lib/kubelet/pods/dc6a9b51-d0a6-4370-94bd-342dcfa54a99/volumes" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.480497 4914 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-t69tv"] Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.480820 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" containerName="extract-content" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.480871 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" containerName="extract-content" Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.480885 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" containerName="extract-utilities" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.480893 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" containerName="extract-utilities" Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.480905 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" containerName="registry-server" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.480912 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" containerName="registry-server" Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.480957 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7656576b-aeae-4b15-b2ab-18658770a1e5" containerName="marketplace-operator" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.480965 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="7656576b-aeae-4b15-b2ab-18658770a1e5" containerName="marketplace-operator" Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.480975 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02984395-bee4-40bd-98ab-2bf03009bb9f" containerName="extract-utilities" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.480983 4914 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="02984395-bee4-40bd-98ab-2bf03009bb9f" containerName="extract-utilities" Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.480993 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" containerName="extract-utilities" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481026 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" containerName="extract-utilities" Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.481043 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7656576b-aeae-4b15-b2ab-18658770a1e5" containerName="marketplace-operator" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481050 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="7656576b-aeae-4b15-b2ab-18658770a1e5" containerName="marketplace-operator" Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.481153 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" containerName="extract-content" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481201 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" containerName="extract-content" Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.481214 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02984395-bee4-40bd-98ab-2bf03009bb9f" containerName="extract-content" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481223 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="02984395-bee4-40bd-98ab-2bf03009bb9f" containerName="extract-content" Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.481235 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02984395-bee4-40bd-98ab-2bf03009bb9f" containerName="registry-server" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481242 4914 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="02984395-bee4-40bd-98ab-2bf03009bb9f" containerName="registry-server" Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.481281 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" containerName="extract-content" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481291 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" containerName="extract-content" Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.481303 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" containerName="extract-utilities" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481311 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" containerName="extract-utilities" Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.481320 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" containerName="registry-server" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481329 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" containerName="registry-server" Jan 27 13:51:14 crc kubenswrapper[4914]: E0127 13:51:14.481366 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" containerName="registry-server" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481375 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" containerName="registry-server" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481539 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6a9b51-d0a6-4370-94bd-342dcfa54a99" containerName="registry-server" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481561 4914 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="02984395-bee4-40bd-98ab-2bf03009bb9f" containerName="registry-server" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481570 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1bbb3b4-c5a3-4939-bf5e-9d6c525ac6ab" containerName="registry-server" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481579 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="7656576b-aeae-4b15-b2ab-18658770a1e5" containerName="marketplace-operator" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481616 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="7656576b-aeae-4b15-b2ab-18658770a1e5" containerName="marketplace-operator" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.481627 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cc3d29-abfa-4a3e-8251-5811e2bab91e" containerName="registry-server" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.482849 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.485428 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.489858 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t69tv"] Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.600147 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhll7\" (UniqueName: \"kubernetes.io/projected/0a47381b-bbaa-48f0-93d6-06bdd256dcc1-kube-api-access-mhll7\") pod \"redhat-marketplace-t69tv\" (UID: \"0a47381b-bbaa-48f0-93d6-06bdd256dcc1\") " pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.600248 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a47381b-bbaa-48f0-93d6-06bdd256dcc1-utilities\") pod \"redhat-marketplace-t69tv\" (UID: \"0a47381b-bbaa-48f0-93d6-06bdd256dcc1\") " pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.600275 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a47381b-bbaa-48f0-93d6-06bdd256dcc1-catalog-content\") pod \"redhat-marketplace-t69tv\" (UID: \"0a47381b-bbaa-48f0-93d6-06bdd256dcc1\") " pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.675792 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5ll2s"] Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.676774 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.679084 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.688138 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5ll2s"] Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.701057 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhll7\" (UniqueName: \"kubernetes.io/projected/0a47381b-bbaa-48f0-93d6-06bdd256dcc1-kube-api-access-mhll7\") pod \"redhat-marketplace-t69tv\" (UID: \"0a47381b-bbaa-48f0-93d6-06bdd256dcc1\") " pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.701145 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a47381b-bbaa-48f0-93d6-06bdd256dcc1-utilities\") pod \"redhat-marketplace-t69tv\" (UID: \"0a47381b-bbaa-48f0-93d6-06bdd256dcc1\") " pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.701182 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a47381b-bbaa-48f0-93d6-06bdd256dcc1-catalog-content\") pod \"redhat-marketplace-t69tv\" (UID: \"0a47381b-bbaa-48f0-93d6-06bdd256dcc1\") " pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.701602 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a47381b-bbaa-48f0-93d6-06bdd256dcc1-catalog-content\") pod \"redhat-marketplace-t69tv\" (UID: \"0a47381b-bbaa-48f0-93d6-06bdd256dcc1\") " 
pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.701745 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a47381b-bbaa-48f0-93d6-06bdd256dcc1-utilities\") pod \"redhat-marketplace-t69tv\" (UID: \"0a47381b-bbaa-48f0-93d6-06bdd256dcc1\") " pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.719961 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhll7\" (UniqueName: \"kubernetes.io/projected/0a47381b-bbaa-48f0-93d6-06bdd256dcc1-kube-api-access-mhll7\") pod \"redhat-marketplace-t69tv\" (UID: \"0a47381b-bbaa-48f0-93d6-06bdd256dcc1\") " pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.802295 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba06a80d-4d83-4f32-bfc3-78c15cfebfe7-catalog-content\") pod \"redhat-operators-5ll2s\" (UID: \"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7\") " pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.802372 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzggr\" (UniqueName: \"kubernetes.io/projected/ba06a80d-4d83-4f32-bfc3-78c15cfebfe7-kube-api-access-vzggr\") pod \"redhat-operators-5ll2s\" (UID: \"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7\") " pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.802402 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba06a80d-4d83-4f32-bfc3-78c15cfebfe7-utilities\") pod \"redhat-operators-5ll2s\" (UID: 
\"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7\") " pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.806004 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.835550 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bdpcf" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.904082 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzggr\" (UniqueName: \"kubernetes.io/projected/ba06a80d-4d83-4f32-bfc3-78c15cfebfe7-kube-api-access-vzggr\") pod \"redhat-operators-5ll2s\" (UID: \"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7\") " pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.904496 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba06a80d-4d83-4f32-bfc3-78c15cfebfe7-utilities\") pod \"redhat-operators-5ll2s\" (UID: \"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7\") " pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.904617 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba06a80d-4d83-4f32-bfc3-78c15cfebfe7-catalog-content\") pod \"redhat-operators-5ll2s\" (UID: \"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7\") " pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.905114 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba06a80d-4d83-4f32-bfc3-78c15cfebfe7-catalog-content\") pod \"redhat-operators-5ll2s\" (UID: \"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7\") " 
pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.905271 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba06a80d-4d83-4f32-bfc3-78c15cfebfe7-utilities\") pod \"redhat-operators-5ll2s\" (UID: \"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7\") " pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:14 crc kubenswrapper[4914]: I0127 13:51:14.937497 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzggr\" (UniqueName: \"kubernetes.io/projected/ba06a80d-4d83-4f32-bfc3-78c15cfebfe7-kube-api-access-vzggr\") pod \"redhat-operators-5ll2s\" (UID: \"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7\") " pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:15 crc kubenswrapper[4914]: I0127 13:51:15.000660 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:15 crc kubenswrapper[4914]: I0127 13:51:15.178721 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5ll2s"] Jan 27 13:51:15 crc kubenswrapper[4914]: W0127 13:51:15.183079 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba06a80d_4d83_4f32_bfc3_78c15cfebfe7.slice/crio-ea2fdc3aa29ac588c1a67eae8b677c8a30d013d7dbf202cab917bc34ef5ea4d4 WatchSource:0}: Error finding container ea2fdc3aa29ac588c1a67eae8b677c8a30d013d7dbf202cab917bc34ef5ea4d4: Status 404 returned error can't find the container with id ea2fdc3aa29ac588c1a67eae8b677c8a30d013d7dbf202cab917bc34ef5ea4d4 Jan 27 13:51:15 crc kubenswrapper[4914]: I0127 13:51:15.239937 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t69tv"] Jan 27 13:51:15 crc kubenswrapper[4914]: W0127 13:51:15.251031 4914 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a47381b_bbaa_48f0_93d6_06bdd256dcc1.slice/crio-bd5d60f6cdcb40629c896117036c0997b288ee92cfbd42872f5af7cec11de3af WatchSource:0}: Error finding container bd5d60f6cdcb40629c896117036c0997b288ee92cfbd42872f5af7cec11de3af: Status 404 returned error can't find the container with id bd5d60f6cdcb40629c896117036c0997b288ee92cfbd42872f5af7cec11de3af Jan 27 13:51:15 crc kubenswrapper[4914]: I0127 13:51:15.837715 4914 generic.go:334] "Generic (PLEG): container finished" podID="ba06a80d-4d83-4f32-bfc3-78c15cfebfe7" containerID="6af31c47bf2fc1979ca345c4f1a932421d0c2ebc71d0438d4dcdd5e3cb42377f" exitCode=0 Jan 27 13:51:15 crc kubenswrapper[4914]: I0127 13:51:15.837807 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5ll2s" event={"ID":"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7","Type":"ContainerDied","Data":"6af31c47bf2fc1979ca345c4f1a932421d0c2ebc71d0438d4dcdd5e3cb42377f"} Jan 27 13:51:15 crc kubenswrapper[4914]: I0127 13:51:15.837893 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5ll2s" event={"ID":"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7","Type":"ContainerStarted","Data":"ea2fdc3aa29ac588c1a67eae8b677c8a30d013d7dbf202cab917bc34ef5ea4d4"} Jan 27 13:51:15 crc kubenswrapper[4914]: I0127 13:51:15.839552 4914 generic.go:334] "Generic (PLEG): container finished" podID="0a47381b-bbaa-48f0-93d6-06bdd256dcc1" containerID="127db3e427b1a45d94d8ded0ceb2409072aaac52738ad4f0daaa5997343e0b3c" exitCode=0 Jan 27 13:51:15 crc kubenswrapper[4914]: I0127 13:51:15.839620 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t69tv" event={"ID":"0a47381b-bbaa-48f0-93d6-06bdd256dcc1","Type":"ContainerDied","Data":"127db3e427b1a45d94d8ded0ceb2409072aaac52738ad4f0daaa5997343e0b3c"} Jan 27 13:51:15 crc kubenswrapper[4914]: I0127 13:51:15.839661 4914 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t69tv" event={"ID":"0a47381b-bbaa-48f0-93d6-06bdd256dcc1","Type":"ContainerStarted","Data":"bd5d60f6cdcb40629c896117036c0997b288ee92cfbd42872f5af7cec11de3af"} Jan 27 13:51:16 crc kubenswrapper[4914]: I0127 13:51:16.877745 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bj7ql"] Jan 27 13:51:16 crc kubenswrapper[4914]: I0127 13:51:16.879246 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:16 crc kubenswrapper[4914]: I0127 13:51:16.882165 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 13:51:16 crc kubenswrapper[4914]: I0127 13:51:16.931806 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e3c9d63-36bb-4aee-a87f-f95798649571-utilities\") pod \"community-operators-bj7ql\" (UID: \"8e3c9d63-36bb-4aee-a87f-f95798649571\") " pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:16 crc kubenswrapper[4914]: I0127 13:51:16.931903 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e3c9d63-36bb-4aee-a87f-f95798649571-catalog-content\") pod \"community-operators-bj7ql\" (UID: \"8e3c9d63-36bb-4aee-a87f-f95798649571\") " pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:16 crc kubenswrapper[4914]: I0127 13:51:16.931951 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbb8v\" (UniqueName: \"kubernetes.io/projected/8e3c9d63-36bb-4aee-a87f-f95798649571-kube-api-access-kbb8v\") pod \"community-operators-bj7ql\" (UID: \"8e3c9d63-36bb-4aee-a87f-f95798649571\") " 
pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:16 crc kubenswrapper[4914]: I0127 13:51:16.951999 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bj7ql"] Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.033683 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e3c9d63-36bb-4aee-a87f-f95798649571-catalog-content\") pod \"community-operators-bj7ql\" (UID: \"8e3c9d63-36bb-4aee-a87f-f95798649571\") " pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.033997 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbb8v\" (UniqueName: \"kubernetes.io/projected/8e3c9d63-36bb-4aee-a87f-f95798649571-kube-api-access-kbb8v\") pod \"community-operators-bj7ql\" (UID: \"8e3c9d63-36bb-4aee-a87f-f95798649571\") " pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.034123 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e3c9d63-36bb-4aee-a87f-f95798649571-catalog-content\") pod \"community-operators-bj7ql\" (UID: \"8e3c9d63-36bb-4aee-a87f-f95798649571\") " pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.034208 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e3c9d63-36bb-4aee-a87f-f95798649571-utilities\") pod \"community-operators-bj7ql\" (UID: \"8e3c9d63-36bb-4aee-a87f-f95798649571\") " pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.034492 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8e3c9d63-36bb-4aee-a87f-f95798649571-utilities\") pod \"community-operators-bj7ql\" (UID: \"8e3c9d63-36bb-4aee-a87f-f95798649571\") " pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.053167 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbb8v\" (UniqueName: \"kubernetes.io/projected/8e3c9d63-36bb-4aee-a87f-f95798649571-kube-api-access-kbb8v\") pod \"community-operators-bj7ql\" (UID: \"8e3c9d63-36bb-4aee-a87f-f95798649571\") " pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.096493 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l52ml"] Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.097734 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.100495 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.135103 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cd8f086-fe10-40d4-a520-fbe48482af35-utilities\") pod \"certified-operators-l52ml\" (UID: \"3cd8f086-fe10-40d4-a520-fbe48482af35\") " pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.135328 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plsb2\" (UniqueName: \"kubernetes.io/projected/3cd8f086-fe10-40d4-a520-fbe48482af35-kube-api-access-plsb2\") pod \"certified-operators-l52ml\" (UID: \"3cd8f086-fe10-40d4-a520-fbe48482af35\") " 
pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.135552 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cd8f086-fe10-40d4-a520-fbe48482af35-catalog-content\") pod \"certified-operators-l52ml\" (UID: \"3cd8f086-fe10-40d4-a520-fbe48482af35\") " pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.160304 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l52ml"] Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.200594 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.236702 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plsb2\" (UniqueName: \"kubernetes.io/projected/3cd8f086-fe10-40d4-a520-fbe48482af35-kube-api-access-plsb2\") pod \"certified-operators-l52ml\" (UID: \"3cd8f086-fe10-40d4-a520-fbe48482af35\") " pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.236793 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cd8f086-fe10-40d4-a520-fbe48482af35-catalog-content\") pod \"certified-operators-l52ml\" (UID: \"3cd8f086-fe10-40d4-a520-fbe48482af35\") " pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.236819 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cd8f086-fe10-40d4-a520-fbe48482af35-utilities\") pod \"certified-operators-l52ml\" (UID: \"3cd8f086-fe10-40d4-a520-fbe48482af35\") " 
pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.237472 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cd8f086-fe10-40d4-a520-fbe48482af35-utilities\") pod \"certified-operators-l52ml\" (UID: \"3cd8f086-fe10-40d4-a520-fbe48482af35\") " pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.237576 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cd8f086-fe10-40d4-a520-fbe48482af35-catalog-content\") pod \"certified-operators-l52ml\" (UID: \"3cd8f086-fe10-40d4-a520-fbe48482af35\") " pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.258202 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plsb2\" (UniqueName: \"kubernetes.io/projected/3cd8f086-fe10-40d4-a520-fbe48482af35-kube-api-access-plsb2\") pod \"certified-operators-l52ml\" (UID: \"3cd8f086-fe10-40d4-a520-fbe48482af35\") " pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:17 crc kubenswrapper[4914]: I0127 13:51:17.414297 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:18 crc kubenswrapper[4914]: I0127 13:51:18.191402 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bj7ql"] Jan 27 13:51:18 crc kubenswrapper[4914]: W0127 13:51:18.195651 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e3c9d63_36bb_4aee_a87f_f95798649571.slice/crio-5589150a0c1f04616d3a89f1dd36505d9fd22b44040dba7c156deca99da62cba WatchSource:0}: Error finding container 5589150a0c1f04616d3a89f1dd36505d9fd22b44040dba7c156deca99da62cba: Status 404 returned error can't find the container with id 5589150a0c1f04616d3a89f1dd36505d9fd22b44040dba7c156deca99da62cba Jan 27 13:51:18 crc kubenswrapper[4914]: W0127 13:51:18.351429 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cd8f086_fe10_40d4_a520_fbe48482af35.slice/crio-7806ff7afc4d1b885fbfaba725060002f676e16f918da04a80dd59aa9eac48bd WatchSource:0}: Error finding container 7806ff7afc4d1b885fbfaba725060002f676e16f918da04a80dd59aa9eac48bd: Status 404 returned error can't find the container with id 7806ff7afc4d1b885fbfaba725060002f676e16f918da04a80dd59aa9eac48bd Jan 27 13:51:18 crc kubenswrapper[4914]: I0127 13:51:18.354547 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l52ml"] Jan 27 13:51:18 crc kubenswrapper[4914]: I0127 13:51:18.862393 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5ll2s" event={"ID":"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7","Type":"ContainerStarted","Data":"dbb7cf1fee973593efdf43da3118c22211abd48d888c67c52e7c6064be3b27be"} Jan 27 13:51:18 crc kubenswrapper[4914]: I0127 13:51:18.876067 4914 generic.go:334] "Generic (PLEG): container finished" podID="3cd8f086-fe10-40d4-a520-fbe48482af35" 
containerID="a07894d475f27a8ef59422c1e594dfd50f55e8de2e18d7e8f4a9caab1df812eb" exitCode=0 Jan 27 13:51:18 crc kubenswrapper[4914]: I0127 13:51:18.876188 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l52ml" event={"ID":"3cd8f086-fe10-40d4-a520-fbe48482af35","Type":"ContainerDied","Data":"a07894d475f27a8ef59422c1e594dfd50f55e8de2e18d7e8f4a9caab1df812eb"} Jan 27 13:51:18 crc kubenswrapper[4914]: I0127 13:51:18.876221 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l52ml" event={"ID":"3cd8f086-fe10-40d4-a520-fbe48482af35","Type":"ContainerStarted","Data":"7806ff7afc4d1b885fbfaba725060002f676e16f918da04a80dd59aa9eac48bd"} Jan 27 13:51:18 crc kubenswrapper[4914]: I0127 13:51:18.878771 4914 generic.go:334] "Generic (PLEG): container finished" podID="0a47381b-bbaa-48f0-93d6-06bdd256dcc1" containerID="550a5317bb1324e873a2bf8cc937896660866140644ebc7ab9dacd5a01f6667b" exitCode=0 Jan 27 13:51:18 crc kubenswrapper[4914]: I0127 13:51:18.878860 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t69tv" event={"ID":"0a47381b-bbaa-48f0-93d6-06bdd256dcc1","Type":"ContainerDied","Data":"550a5317bb1324e873a2bf8cc937896660866140644ebc7ab9dacd5a01f6667b"} Jan 27 13:51:18 crc kubenswrapper[4914]: I0127 13:51:18.889944 4914 generic.go:334] "Generic (PLEG): container finished" podID="8e3c9d63-36bb-4aee-a87f-f95798649571" containerID="15bb87fa361b48e266bd39c08a712e924f4a8a34e38873d33aa3fa656c587d54" exitCode=0 Jan 27 13:51:18 crc kubenswrapper[4914]: I0127 13:51:18.890102 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj7ql" event={"ID":"8e3c9d63-36bb-4aee-a87f-f95798649571","Type":"ContainerDied","Data":"15bb87fa361b48e266bd39c08a712e924f4a8a34e38873d33aa3fa656c587d54"} Jan 27 13:51:18 crc kubenswrapper[4914]: I0127 13:51:18.890205 4914 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-bj7ql" event={"ID":"8e3c9d63-36bb-4aee-a87f-f95798649571","Type":"ContainerStarted","Data":"5589150a0c1f04616d3a89f1dd36505d9fd22b44040dba7c156deca99da62cba"} Jan 27 13:51:19 crc kubenswrapper[4914]: I0127 13:51:19.896724 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t69tv" event={"ID":"0a47381b-bbaa-48f0-93d6-06bdd256dcc1","Type":"ContainerStarted","Data":"fb47322e47e631fbc011d987c6b7d6b77acc9fd170d93a6f6111530fac7f8032"} Jan 27 13:51:19 crc kubenswrapper[4914]: I0127 13:51:19.898267 4914 generic.go:334] "Generic (PLEG): container finished" podID="ba06a80d-4d83-4f32-bfc3-78c15cfebfe7" containerID="dbb7cf1fee973593efdf43da3118c22211abd48d888c67c52e7c6064be3b27be" exitCode=0 Jan 27 13:51:19 crc kubenswrapper[4914]: I0127 13:51:19.898304 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5ll2s" event={"ID":"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7","Type":"ContainerDied","Data":"dbb7cf1fee973593efdf43da3118c22211abd48d888c67c52e7c6064be3b27be"} Jan 27 13:51:19 crc kubenswrapper[4914]: I0127 13:51:19.916696 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t69tv" podStartSLOduration=2.431976847 podStartE2EDuration="5.916677497s" podCreationTimestamp="2026-01-27 13:51:14 +0000 UTC" firstStartedPulling="2026-01-27 13:51:15.84097649 +0000 UTC m=+434.153326575" lastFinishedPulling="2026-01-27 13:51:19.32567714 +0000 UTC m=+437.638027225" observedRunningTime="2026-01-27 13:51:19.916366138 +0000 UTC m=+438.228716233" watchObservedRunningTime="2026-01-27 13:51:19.916677497 +0000 UTC m=+438.229027582" Jan 27 13:51:20 crc kubenswrapper[4914]: I0127 13:51:20.905929 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5ll2s" 
event={"ID":"ba06a80d-4d83-4f32-bfc3-78c15cfebfe7","Type":"ContainerStarted","Data":"26aaa7b8270fd22feeaa43706e14344f997997f448a9f0b0fe3b76fe60520d67"} Jan 27 13:51:20 crc kubenswrapper[4914]: I0127 13:51:20.907950 4914 generic.go:334] "Generic (PLEG): container finished" podID="3cd8f086-fe10-40d4-a520-fbe48482af35" containerID="0d0dd34bea36d441aebd2f13585fde58b6371db7710c12a11318bba420d02904" exitCode=0 Jan 27 13:51:20 crc kubenswrapper[4914]: I0127 13:51:20.907998 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l52ml" event={"ID":"3cd8f086-fe10-40d4-a520-fbe48482af35","Type":"ContainerDied","Data":"0d0dd34bea36d441aebd2f13585fde58b6371db7710c12a11318bba420d02904"} Jan 27 13:51:20 crc kubenswrapper[4914]: I0127 13:51:20.930261 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5ll2s" podStartSLOduration=2.469154818 podStartE2EDuration="6.930244722s" podCreationTimestamp="2026-01-27 13:51:14 +0000 UTC" firstStartedPulling="2026-01-27 13:51:15.839904671 +0000 UTC m=+434.152254756" lastFinishedPulling="2026-01-27 13:51:20.300994575 +0000 UTC m=+438.613344660" observedRunningTime="2026-01-27 13:51:20.929491111 +0000 UTC m=+439.241841206" watchObservedRunningTime="2026-01-27 13:51:20.930244722 +0000 UTC m=+439.242594797" Jan 27 13:51:21 crc kubenswrapper[4914]: I0127 13:51:21.914454 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l52ml" event={"ID":"3cd8f086-fe10-40d4-a520-fbe48482af35","Type":"ContainerStarted","Data":"664ebdb7f31aca1c6a0f4ecf590f0245e1d0e3ed20fae6b087dfc1558f9053a9"} Jan 27 13:51:21 crc kubenswrapper[4914]: I0127 13:51:21.932046 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l52ml" podStartSLOduration=2.446862173 podStartE2EDuration="4.932027134s" podCreationTimestamp="2026-01-27 13:51:17 +0000 UTC" 
firstStartedPulling="2026-01-27 13:51:18.8778902 +0000 UTC m=+437.190240285" lastFinishedPulling="2026-01-27 13:51:21.363055161 +0000 UTC m=+439.675405246" observedRunningTime="2026-01-27 13:51:21.92935793 +0000 UTC m=+440.241708005" watchObservedRunningTime="2026-01-27 13:51:21.932027134 +0000 UTC m=+440.244377219" Jan 27 13:51:24 crc kubenswrapper[4914]: I0127 13:51:24.807180 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:24 crc kubenswrapper[4914]: I0127 13:51:24.807842 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:24 crc kubenswrapper[4914]: I0127 13:51:24.855269 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:24 crc kubenswrapper[4914]: I0127 13:51:24.929301 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj7ql" event={"ID":"8e3c9d63-36bb-4aee-a87f-f95798649571","Type":"ContainerStarted","Data":"8f11f2e5b4e52f6ba4c3fd703f850c531842b43c61d79d121ee6bb09cf189db6"} Jan 27 13:51:24 crc kubenswrapper[4914]: I0127 13:51:24.967315 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t69tv" Jan 27 13:51:25 crc kubenswrapper[4914]: I0127 13:51:25.011232 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:25 crc kubenswrapper[4914]: I0127 13:51:25.011278 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:25 crc kubenswrapper[4914]: I0127 13:51:25.935740 4914 generic.go:334] "Generic (PLEG): container finished" podID="8e3c9d63-36bb-4aee-a87f-f95798649571" 
containerID="8f11f2e5b4e52f6ba4c3fd703f850c531842b43c61d79d121ee6bb09cf189db6" exitCode=0 Jan 27 13:51:25 crc kubenswrapper[4914]: I0127 13:51:25.936863 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj7ql" event={"ID":"8e3c9d63-36bb-4aee-a87f-f95798649571","Type":"ContainerDied","Data":"8f11f2e5b4e52f6ba4c3fd703f850c531842b43c61d79d121ee6bb09cf189db6"} Jan 27 13:51:26 crc kubenswrapper[4914]: I0127 13:51:26.053471 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5ll2s" podUID="ba06a80d-4d83-4f32-bfc3-78c15cfebfe7" containerName="registry-server" probeResult="failure" output=< Jan 27 13:51:26 crc kubenswrapper[4914]: timeout: failed to connect service ":50051" within 1s Jan 27 13:51:26 crc kubenswrapper[4914]: > Jan 27 13:51:26 crc kubenswrapper[4914]: I0127 13:51:26.943557 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj7ql" event={"ID":"8e3c9d63-36bb-4aee-a87f-f95798649571","Type":"ContainerStarted","Data":"8f9a2be9f46b5ce429176a6f2849bcfc1c4c8405213eaeca7a32afa88982f6cb"} Jan 27 13:51:26 crc kubenswrapper[4914]: I0127 13:51:26.962705 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bj7ql" podStartSLOduration=3.203461148 podStartE2EDuration="10.962684867s" podCreationTimestamp="2026-01-27 13:51:16 +0000 UTC" firstStartedPulling="2026-01-27 13:51:18.892490441 +0000 UTC m=+437.204840526" lastFinishedPulling="2026-01-27 13:51:26.65171416 +0000 UTC m=+444.964064245" observedRunningTime="2026-01-27 13:51:26.958991686 +0000 UTC m=+445.271341771" watchObservedRunningTime="2026-01-27 13:51:26.962684867 +0000 UTC m=+445.275034952" Jan 27 13:51:27 crc kubenswrapper[4914]: I0127 13:51:27.200956 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:27 crc 
kubenswrapper[4914]: I0127 13:51:27.201151 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:27 crc kubenswrapper[4914]: I0127 13:51:27.415338 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:27 crc kubenswrapper[4914]: I0127 13:51:27.415380 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:27 crc kubenswrapper[4914]: I0127 13:51:27.457963 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:27 crc kubenswrapper[4914]: I0127 13:51:27.986309 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l52ml" Jan 27 13:51:28 crc kubenswrapper[4914]: I0127 13:51:28.235566 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bj7ql" podUID="8e3c9d63-36bb-4aee-a87f-f95798649571" containerName="registry-server" probeResult="failure" output=< Jan 27 13:51:28 crc kubenswrapper[4914]: timeout: failed to connect service ":50051" within 1s Jan 27 13:51:28 crc kubenswrapper[4914]: > Jan 27 13:51:35 crc kubenswrapper[4914]: I0127 13:51:35.043113 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:35 crc kubenswrapper[4914]: I0127 13:51:35.085402 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5ll2s" Jan 27 13:51:37 crc kubenswrapper[4914]: I0127 13:51:37.243944 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bj7ql" Jan 27 13:51:37 crc kubenswrapper[4914]: I0127 13:51:37.287487 4914 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bj7ql"
Jan 27 13:53:02 crc kubenswrapper[4914]: I0127 13:53:02.494903 4914 scope.go:117] "RemoveContainer" containerID="ac1e0d17b1e3228aa363cbb9dec5bbfa029f5d49fa96a14439884f0313108016"
Jan 27 13:53:07 crc kubenswrapper[4914]: I0127 13:53:07.691399 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 13:53:07 crc kubenswrapper[4914]: I0127 13:53:07.692076 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 13:53:37 crc kubenswrapper[4914]: I0127 13:53:37.690916 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 13:53:37 crc kubenswrapper[4914]: I0127 13:53:37.691519 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.170860 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m6grx"]
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.172522 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.194120 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m6grx"]
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.359956 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97741ddf-e19f-4080-8e58-1d3df7b63a97-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.360073 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97741ddf-e19f-4080-8e58-1d3df7b63a97-registry-tls\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.360148 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97741ddf-e19f-4080-8e58-1d3df7b63a97-registry-certificates\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.360400 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97741ddf-e19f-4080-8e58-1d3df7b63a97-trusted-ca\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.360486 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqfmw\" (UniqueName: \"kubernetes.io/projected/97741ddf-e19f-4080-8e58-1d3df7b63a97-kube-api-access-qqfmw\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.360523 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97741ddf-e19f-4080-8e58-1d3df7b63a97-bound-sa-token\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.360619 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.360760 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97741ddf-e19f-4080-8e58-1d3df7b63a97-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.382624 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.461630 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97741ddf-e19f-4080-8e58-1d3df7b63a97-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.461703 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97741ddf-e19f-4080-8e58-1d3df7b63a97-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.461734 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97741ddf-e19f-4080-8e58-1d3df7b63a97-registry-tls\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.461778 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97741ddf-e19f-4080-8e58-1d3df7b63a97-registry-certificates\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.461867 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97741ddf-e19f-4080-8e58-1d3df7b63a97-trusted-ca\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.461890 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqfmw\" (UniqueName: \"kubernetes.io/projected/97741ddf-e19f-4080-8e58-1d3df7b63a97-kube-api-access-qqfmw\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.461911 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97741ddf-e19f-4080-8e58-1d3df7b63a97-bound-sa-token\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.463231 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97741ddf-e19f-4080-8e58-1d3df7b63a97-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.465091 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97741ddf-e19f-4080-8e58-1d3df7b63a97-registry-certificates\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.467024 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97741ddf-e19f-4080-8e58-1d3df7b63a97-trusted-ca\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.468444 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97741ddf-e19f-4080-8e58-1d3df7b63a97-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.473938 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97741ddf-e19f-4080-8e58-1d3df7b63a97-registry-tls\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.481502 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqfmw\" (UniqueName: \"kubernetes.io/projected/97741ddf-e19f-4080-8e58-1d3df7b63a97-kube-api-access-qqfmw\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.487456 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97741ddf-e19f-4080-8e58-1d3df7b63a97-bound-sa-token\") pod \"image-registry-66df7c8f76-m6grx\" (UID: \"97741ddf-e19f-4080-8e58-1d3df7b63a97\") " pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.491393 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:02 crc kubenswrapper[4914]: I0127 13:54:02.696864 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m6grx"]
Jan 27 13:54:03 crc kubenswrapper[4914]: I0127 13:54:03.663864 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-m6grx" event={"ID":"97741ddf-e19f-4080-8e58-1d3df7b63a97","Type":"ContainerStarted","Data":"4bb531c3abc4e03b6d187b9a21dea582b559633c8c41b5752483140afc97aa3e"}
Jan 27 13:54:03 crc kubenswrapper[4914]: I0127 13:54:03.663935 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-m6grx" event={"ID":"97741ddf-e19f-4080-8e58-1d3df7b63a97","Type":"ContainerStarted","Data":"3c088c6f6024feb7bee421a2317aa4ed08b17b8013976399d27da52ff0741587"}
Jan 27 13:54:03 crc kubenswrapper[4914]: I0127 13:54:03.663986 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:03 crc kubenswrapper[4914]: I0127 13:54:03.694244 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-m6grx" podStartSLOduration=1.694226704 podStartE2EDuration="1.694226704s" podCreationTimestamp="2026-01-27 13:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:54:03.691122446 +0000 UTC m=+602.003472551" watchObservedRunningTime="2026-01-27 13:54:03.694226704 +0000 UTC m=+602.006576789"
Jan 27 13:54:07 crc kubenswrapper[4914]: I0127 13:54:07.691183 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 13:54:07 crc kubenswrapper[4914]: I0127 13:54:07.692903 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 13:54:07 crc kubenswrapper[4914]: I0127 13:54:07.692954 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz"
Jan 27 13:54:07 crc kubenswrapper[4914]: I0127 13:54:07.693423 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4133f9318dfd6995faf0a783971bc157faacba434797a9f650945b1d398d1c43"} pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 13:54:07 crc kubenswrapper[4914]: I0127 13:54:07.693472 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" containerID="cri-o://4133f9318dfd6995faf0a783971bc157faacba434797a9f650945b1d398d1c43" gracePeriod=600
Jan 27 13:54:08 crc kubenswrapper[4914]: I0127 13:54:08.694606 4914 generic.go:334] "Generic (PLEG): container finished" podID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerID="4133f9318dfd6995faf0a783971bc157faacba434797a9f650945b1d398d1c43" exitCode=0
Jan 27 13:54:08 crc kubenswrapper[4914]: I0127 13:54:08.694671 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerDied","Data":"4133f9318dfd6995faf0a783971bc157faacba434797a9f650945b1d398d1c43"}
Jan 27 13:54:08 crc kubenswrapper[4914]: I0127 13:54:08.695016 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"37dc1ebac0798b9157fdcba67221027c4ccb916dafe67ba310b9792bc6166b37"}
Jan 27 13:54:08 crc kubenswrapper[4914]: I0127 13:54:08.695050 4914 scope.go:117] "RemoveContainer" containerID="95915b10dd9749a36c926b1d56b1160495b6cbef34f668e33fde194019445d27"
Jan 27 13:54:22 crc kubenswrapper[4914]: I0127 13:54:22.497561 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-m6grx"
Jan 27 13:54:22 crc kubenswrapper[4914]: I0127 13:54:22.556519 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rhxcj"]
Jan 27 13:54:47 crc kubenswrapper[4914]: I0127 13:54:47.598191 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" podUID="df3e61ea-86a0-416e-9e24-d90241f6a543" containerName="registry" containerID="cri-o://eff2c746c1ff87a6b07bbbbebd90376e42cecbe25ac5eb5d5a483cf6d94d0c93" gracePeriod=30
Jan 27 13:54:47 crc kubenswrapper[4914]: I0127 13:54:47.899014 4914 generic.go:334] "Generic (PLEG): container finished" podID="df3e61ea-86a0-416e-9e24-d90241f6a543" containerID="eff2c746c1ff87a6b07bbbbebd90376e42cecbe25ac5eb5d5a483cf6d94d0c93" exitCode=0
Jan 27 13:54:47 crc kubenswrapper[4914]: I0127 13:54:47.899058 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" event={"ID":"df3e61ea-86a0-416e-9e24-d90241f6a543","Type":"ContainerDied","Data":"eff2c746c1ff87a6b07bbbbebd90376e42cecbe25ac5eb5d5a483cf6d94d0c93"}
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.036332 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.108159 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-bound-sa-token\") pod \"df3e61ea-86a0-416e-9e24-d90241f6a543\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") "
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.108240 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df3e61ea-86a0-416e-9e24-d90241f6a543-registry-certificates\") pod \"df3e61ea-86a0-416e-9e24-d90241f6a543\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") "
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.108300 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwjt4\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-kube-api-access-zwjt4\") pod \"df3e61ea-86a0-416e-9e24-d90241f6a543\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") "
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.108326 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-registry-tls\") pod \"df3e61ea-86a0-416e-9e24-d90241f6a543\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") "
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.108356 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df3e61ea-86a0-416e-9e24-d90241f6a543-ca-trust-extracted\") pod \"df3e61ea-86a0-416e-9e24-d90241f6a543\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") "
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.108406 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df3e61ea-86a0-416e-9e24-d90241f6a543-installation-pull-secrets\") pod \"df3e61ea-86a0-416e-9e24-d90241f6a543\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") "
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.108623 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"df3e61ea-86a0-416e-9e24-d90241f6a543\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") "
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.108686 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df3e61ea-86a0-416e-9e24-d90241f6a543-trusted-ca\") pod \"df3e61ea-86a0-416e-9e24-d90241f6a543\" (UID: \"df3e61ea-86a0-416e-9e24-d90241f6a543\") "
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.109463 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3e61ea-86a0-416e-9e24-d90241f6a543-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "df3e61ea-86a0-416e-9e24-d90241f6a543" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.109497 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3e61ea-86a0-416e-9e24-d90241f6a543-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "df3e61ea-86a0-416e-9e24-d90241f6a543" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.115308 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "df3e61ea-86a0-416e-9e24-d90241f6a543" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.115422 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3e61ea-86a0-416e-9e24-d90241f6a543-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "df3e61ea-86a0-416e-9e24-d90241f6a543" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.116564 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-kube-api-access-zwjt4" (OuterVolumeSpecName: "kube-api-access-zwjt4") pod "df3e61ea-86a0-416e-9e24-d90241f6a543" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543"). InnerVolumeSpecName "kube-api-access-zwjt4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.116711 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "df3e61ea-86a0-416e-9e24-d90241f6a543" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.125767 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "df3e61ea-86a0-416e-9e24-d90241f6a543" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.134228 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df3e61ea-86a0-416e-9e24-d90241f6a543-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "df3e61ea-86a0-416e-9e24-d90241f6a543" (UID: "df3e61ea-86a0-416e-9e24-d90241f6a543"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.210588 4914 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df3e61ea-86a0-416e-9e24-d90241f6a543-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.210837 4914 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df3e61ea-86a0-416e-9e24-d90241f6a543-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.210939 4914 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.211001 4914 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df3e61ea-86a0-416e-9e24-d90241f6a543-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.211089 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwjt4\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-kube-api-access-zwjt4\") on node \"crc\" DevicePath \"\""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.211204 4914 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df3e61ea-86a0-416e-9e24-d90241f6a543-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.211258 4914 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df3e61ea-86a0-416e-9e24-d90241f6a543-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.911466 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj" event={"ID":"df3e61ea-86a0-416e-9e24-d90241f6a543","Type":"ContainerDied","Data":"f9dc79d613981944c31a9f83d55c964d0c0b549257ce924b5ac9499c7b32d2b1"}
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.911527 4914 scope.go:117] "RemoveContainer" containerID="eff2c746c1ff87a6b07bbbbebd90376e42cecbe25ac5eb5d5a483cf6d94d0c93"
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.911559 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rhxcj"
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.935313 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rhxcj"]
Jan 27 13:54:48 crc kubenswrapper[4914]: I0127 13:54:48.939377 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rhxcj"]
Jan 27 13:54:50 crc kubenswrapper[4914]: I0127 13:54:50.302487 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df3e61ea-86a0-416e-9e24-d90241f6a543" path="/var/lib/kubelet/pods/df3e61ea-86a0-416e-9e24-d90241f6a543/volumes"
Jan 27 13:55:02 crc kubenswrapper[4914]: I0127 13:55:02.568267 4914 scope.go:117] "RemoveContainer" containerID="f8df7c6da3d1435d1d9ffcdcb978c7edfe5e4e53efe3366e02857d60e4caf230"
Jan 27 13:55:02 crc kubenswrapper[4914]: I0127 13:55:02.585622 4914 scope.go:117] "RemoveContainer" containerID="3d9cf990ae03248d15e2de8b3ff1faf2094ac36cdd545a3098025209ce754a03"
Jan 27 13:56:07 crc kubenswrapper[4914]: I0127 13:56:07.691109 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 13:56:07 crc kubenswrapper[4914]: I0127 13:56:07.691771 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 13:56:37 crc kubenswrapper[4914]: I0127 13:56:37.690550 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 13:56:37 crc kubenswrapper[4914]: I0127 13:56:37.691117 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 13:56:44 crc kubenswrapper[4914]: I0127 13:56:44.904033 4914 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 13:57:07 crc kubenswrapper[4914]: I0127 13:57:07.690899 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 13:57:07 crc kubenswrapper[4914]: I0127 13:57:07.691465 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 13:57:07 crc kubenswrapper[4914]: I0127 13:57:07.691521 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz"
Jan 27 13:57:07 crc kubenswrapper[4914]: I0127 13:57:07.692094 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37dc1ebac0798b9157fdcba67221027c4ccb916dafe67ba310b9792bc6166b37"} pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 13:57:07 crc kubenswrapper[4914]: I0127 13:57:07.692150 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" containerID="cri-o://37dc1ebac0798b9157fdcba67221027c4ccb916dafe67ba310b9792bc6166b37" gracePeriod=600
Jan 27 13:57:08 crc kubenswrapper[4914]: I0127 13:57:08.624192 4914 generic.go:334] "Generic (PLEG): container finished" podID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerID="37dc1ebac0798b9157fdcba67221027c4ccb916dafe67ba310b9792bc6166b37" exitCode=0
Jan 27 13:57:08 crc kubenswrapper[4914]: I0127 13:57:08.624237 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerDied","Data":"37dc1ebac0798b9157fdcba67221027c4ccb916dafe67ba310b9792bc6166b37"}
Jan 27 13:57:08 crc kubenswrapper[4914]: I0127 13:57:08.624615 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"a0fd1f806130ae08db1dd18a20f06b6fe85e397f8f5aa2658045094f139caa41"}
Jan 27 13:57:08 crc kubenswrapper[4914]: I0127 13:57:08.624647 4914 scope.go:117] "RemoveContainer" containerID="4133f9318dfd6995faf0a783971bc157faacba434797a9f650945b1d398d1c43"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.911276 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jcblw"]
Jan 27 13:57:56 crc kubenswrapper[4914]: E0127 13:57:56.912000 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3e61ea-86a0-416e-9e24-d90241f6a543" containerName="registry"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.912015 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3e61ea-86a0-416e-9e24-d90241f6a543" containerName="registry"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.912133 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="df3e61ea-86a0-416e-9e24-d90241f6a543" containerName="registry"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.912583 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jcblw"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.913931 4914 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-svqqw"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.915880 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.916780 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.926474 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jcblw"]
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.928660 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-pszcq"]
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.929283 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-pszcq"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.932098 4914 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-b8hpq"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.947480 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-pszcq"]
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.952636 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-9tnxd"]
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.953295 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-9tnxd"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.955530 4914 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-djckq"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.971687 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-9tnxd"]
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.985327 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhm86\" (UniqueName: \"kubernetes.io/projected/3f8a8e0d-b50d-48d2-b899-a22eef5a2253-kube-api-access-hhm86\") pod \"cert-manager-webhook-687f57d79b-9tnxd\" (UID: \"3f8a8e0d-b50d-48d2-b899-a22eef5a2253\") " pod="cert-manager/cert-manager-webhook-687f57d79b-9tnxd"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.985441 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7rfq\" (UniqueName: \"kubernetes.io/projected/f1f5b8cb-4004-4bab-89c7-f730f69d8ca2-kube-api-access-d7rfq\") pod \"cert-manager-858654f9db-pszcq\" (UID: \"f1f5b8cb-4004-4bab-89c7-f730f69d8ca2\") " pod="cert-manager/cert-manager-858654f9db-pszcq"
Jan 27 13:57:56 crc kubenswrapper[4914]: I0127 13:57:56.985521 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh69l\" (UniqueName: \"kubernetes.io/projected/d3dde8fe-073a-4757-ae55-141e026db3ba-kube-api-access-xh69l\") pod \"cert-manager-cainjector-cf98fcc89-jcblw\" (UID: \"d3dde8fe-073a-4757-ae55-141e026db3ba\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jcblw"
Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.086511 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhm86\" (UniqueName: \"kubernetes.io/projected/3f8a8e0d-b50d-48d2-b899-a22eef5a2253-kube-api-access-hhm86\") pod \"cert-manager-webhook-687f57d79b-9tnxd\" (UID: \"3f8a8e0d-b50d-48d2-b899-a22eef5a2253\") " pod="cert-manager/cert-manager-webhook-687f57d79b-9tnxd"
Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.086602 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7rfq\" (UniqueName: \"kubernetes.io/projected/f1f5b8cb-4004-4bab-89c7-f730f69d8ca2-kube-api-access-d7rfq\") pod \"cert-manager-858654f9db-pszcq\" (UID: \"f1f5b8cb-4004-4bab-89c7-f730f69d8ca2\") " pod="cert-manager/cert-manager-858654f9db-pszcq"
Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.086627 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh69l\" (UniqueName: \"kubernetes.io/projected/d3dde8fe-073a-4757-ae55-141e026db3ba-kube-api-access-xh69l\") pod \"cert-manager-cainjector-cf98fcc89-jcblw\" (UID: \"d3dde8fe-073a-4757-ae55-141e026db3ba\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jcblw"
Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.104736 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7rfq\" (UniqueName: \"kubernetes.io/projected/f1f5b8cb-4004-4bab-89c7-f730f69d8ca2-kube-api-access-d7rfq\") pod \"cert-manager-858654f9db-pszcq\" (UID: \"f1f5b8cb-4004-4bab-89c7-f730f69d8ca2\") " pod="cert-manager/cert-manager-858654f9db-pszcq"
Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.104826 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh69l\" (UniqueName: \"kubernetes.io/projected/d3dde8fe-073a-4757-ae55-141e026db3ba-kube-api-access-xh69l\") pod \"cert-manager-cainjector-cf98fcc89-jcblw\" (UID: \"d3dde8fe-073a-4757-ae55-141e026db3ba\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jcblw"
Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.105303 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhm86\" (UniqueName: \"kubernetes.io/projected/3f8a8e0d-b50d-48d2-b899-a22eef5a2253-kube-api-access-hhm86\") pod \"cert-manager-webhook-687f57d79b-9tnxd\" (UID: \"3f8a8e0d-b50d-48d2-b899-a22eef5a2253\") " pod="cert-manager/cert-manager-webhook-687f57d79b-9tnxd"
Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.230169 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jcblw"
Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.245034 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-pszcq"
Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.268720 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-9tnxd"
Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.539147 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-9tnxd"]
Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.547263 4914 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.647390 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jcblw"]
Jan 27 13:57:57 crc kubenswrapper[4914]: W0127 13:57:57.651499 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3dde8fe_073a_4757_ae55_141e026db3ba.slice/crio-2ba6ba3b4834aa249159dbc6b31c688d675147b0c95343f80fb15cd0e5a83c2a WatchSource:0}: Error finding container 2ba6ba3b4834aa249159dbc6b31c688d675147b0c95343f80fb15cd0e5a83c2a: Status 404 returned error can't find the container with id 2ba6ba3b4834aa249159dbc6b31c688d675147b0c95343f80fb15cd0e5a83c2a
Jan 27 13:57:57 crc
kubenswrapper[4914]: I0127 13:57:57.691006 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-pszcq"] Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.904512 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-9tnxd" event={"ID":"3f8a8e0d-b50d-48d2-b899-a22eef5a2253","Type":"ContainerStarted","Data":"71eb6c841347e42a0498fcc777c21fb9a7ff14b981e03173a264c28c6a3d4993"} Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.905735 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-pszcq" event={"ID":"f1f5b8cb-4004-4bab-89c7-f730f69d8ca2","Type":"ContainerStarted","Data":"4057c224f9b2cfc87d0333ddc5d1bb563d54e87887231bdc0abedf5498c93dd2"} Jan 27 13:57:57 crc kubenswrapper[4914]: I0127 13:57:57.907018 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jcblw" event={"ID":"d3dde8fe-073a-4757-ae55-141e026db3ba","Type":"ContainerStarted","Data":"2ba6ba3b4834aa249159dbc6b31c688d675147b0c95343f80fb15cd0e5a83c2a"} Jan 27 13:58:00 crc kubenswrapper[4914]: I0127 13:58:00.924656 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jcblw" event={"ID":"d3dde8fe-073a-4757-ae55-141e026db3ba","Type":"ContainerStarted","Data":"4ca61b49f4404543c76cb6926d30a2272c315537332458a8124fa6b0653b4598"} Jan 27 13:58:00 crc kubenswrapper[4914]: I0127 13:58:00.926006 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-pszcq" event={"ID":"f1f5b8cb-4004-4bab-89c7-f730f69d8ca2","Type":"ContainerStarted","Data":"751e126d01e6110b40f4be8e9068e9056ce5e6a9e47691d7a81e3bd87a6cbb6a"} Jan 27 13:58:00 crc kubenswrapper[4914]: I0127 13:58:00.939462 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jcblw" podStartSLOduration=1.889311821 
podStartE2EDuration="4.939447119s" podCreationTimestamp="2026-01-27 13:57:56 +0000 UTC" firstStartedPulling="2026-01-27 13:57:57.653770744 +0000 UTC m=+835.966120829" lastFinishedPulling="2026-01-27 13:58:00.703906022 +0000 UTC m=+839.016256127" observedRunningTime="2026-01-27 13:58:00.938309357 +0000 UTC m=+839.250659452" watchObservedRunningTime="2026-01-27 13:58:00.939447119 +0000 UTC m=+839.251797204" Jan 27 13:58:00 crc kubenswrapper[4914]: I0127 13:58:00.956129 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-pszcq" podStartSLOduration=1.951244917 podStartE2EDuration="4.956112181s" podCreationTimestamp="2026-01-27 13:57:56 +0000 UTC" firstStartedPulling="2026-01-27 13:57:57.694385999 +0000 UTC m=+836.006736084" lastFinishedPulling="2026-01-27 13:58:00.699253263 +0000 UTC m=+839.011603348" observedRunningTime="2026-01-27 13:58:00.952529321 +0000 UTC m=+839.264879416" watchObservedRunningTime="2026-01-27 13:58:00.956112181 +0000 UTC m=+839.268462266" Jan 27 13:58:01 crc kubenswrapper[4914]: I0127 13:58:01.933166 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-9tnxd" event={"ID":"3f8a8e0d-b50d-48d2-b899-a22eef5a2253","Type":"ContainerStarted","Data":"069fbdc1c584ba4c82bf377646136a43430e260718a39e79451815122207f409"} Jan 27 13:58:01 crc kubenswrapper[4914]: I0127 13:58:01.965193 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-9tnxd" podStartSLOduration=2.723819598 podStartE2EDuration="5.965161835s" podCreationTimestamp="2026-01-27 13:57:56 +0000 UTC" firstStartedPulling="2026-01-27 13:57:57.546979594 +0000 UTC m=+835.859329679" lastFinishedPulling="2026-01-27 13:58:00.788321831 +0000 UTC m=+839.100671916" observedRunningTime="2026-01-27 13:58:01.956180966 +0000 UTC m=+840.268531061" watchObservedRunningTime="2026-01-27 13:58:01.965161835 +0000 UTC m=+840.277511960" Jan 
27 13:58:02 crc kubenswrapper[4914]: I0127 13:58:02.269651 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-9tnxd" Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.632929 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7m5xg"] Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.633310 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovn-controller" containerID="cri-o://4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258" gracePeriod=30 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.633743 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="northd" containerID="cri-o://009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d" gracePeriod=30 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.633902 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d" gracePeriod=30 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.633866 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="nbdb" containerID="cri-o://4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef" gracePeriod=30 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.633988 4914 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovn-acl-logging" containerID="cri-o://4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b" gracePeriod=30 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.633996 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="kube-rbac-proxy-node" containerID="cri-o://60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9" gracePeriod=30 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.633758 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="sbdb" containerID="cri-o://cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10" gracePeriod=30 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.695190 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" containerID="cri-o://4c154c6ae2cf3ee27bc31d0512ec48f46cf0ed6d8ebf30769e48987700f5d73c" gracePeriod=30 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.950773 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b628_38170a87-0bc0-4c7d-b7a0-45b86a1f79e3/kube-multus/2.log" Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.951442 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b628_38170a87-0bc0-4c7d-b7a0-45b86a1f79e3/kube-multus/1.log" Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.951482 4914 generic.go:334] "Generic (PLEG): container finished" podID="38170a87-0bc0-4c7d-b7a0-45b86a1f79e3" 
containerID="807f07d403d8741c6222e04cae0ce46ded60de609107b76c001b4f4282dcbb15" exitCode=2 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.951535 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b628" event={"ID":"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3","Type":"ContainerDied","Data":"807f07d403d8741c6222e04cae0ce46ded60de609107b76c001b4f4282dcbb15"} Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.951567 4914 scope.go:117] "RemoveContainer" containerID="cbd46cb10b4609f1c672c23d55dbb211fb5f130fc861dbc837fa5be5b44a2f90" Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.952089 4914 scope.go:117] "RemoveContainer" containerID="807f07d403d8741c6222e04cae0ce46ded60de609107b76c001b4f4282dcbb15" Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.956158 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/3.log" Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.961740 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovn-acl-logging/0.log" Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962289 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovn-controller/0.log" Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962765 4914 generic.go:334] "Generic (PLEG): container finished" podID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerID="4c154c6ae2cf3ee27bc31d0512ec48f46cf0ed6d8ebf30769e48987700f5d73c" exitCode=0 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962796 4914 generic.go:334] "Generic (PLEG): container finished" podID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerID="4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef" exitCode=0 Jan 27 13:58:04 crc kubenswrapper[4914]: 
I0127 13:58:04.962804 4914 generic.go:334] "Generic (PLEG): container finished" podID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerID="009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d" exitCode=0 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962810 4914 generic.go:334] "Generic (PLEG): container finished" podID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerID="c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d" exitCode=0 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962817 4914 generic.go:334] "Generic (PLEG): container finished" podID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerID="60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9" exitCode=0 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962826 4914 generic.go:334] "Generic (PLEG): container finished" podID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerID="4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b" exitCode=143 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962836 4914 generic.go:334] "Generic (PLEG): container finished" podID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerID="4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258" exitCode=143 Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962869 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"4c154c6ae2cf3ee27bc31d0512ec48f46cf0ed6d8ebf30769e48987700f5d73c"} Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962912 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef"} Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962927 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d"} Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962938 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d"} Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962952 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9"} Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962963 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b"} Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.962974 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258"} Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.989110 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovnkube-controller/3.log" Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.992656 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovn-acl-logging/0.log" Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.993269 4914 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovn-controller/0.log" Jan 27 13:58:04 crc kubenswrapper[4914]: I0127 13:58:04.994320 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.002134 4914 scope.go:117] "RemoveContainer" containerID="a16d765b49acc107009e3c8ebfc08e72f9e2772b3f0b03936a26dd8ff4fa1cf5" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.059888 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hp29p"] Jan 27 13:58:05 crc kubenswrapper[4914]: E0127 13:58:05.060300 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060324 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: E0127 13:58:05.060340 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="sbdb" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060349 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="sbdb" Jan 27 13:58:05 crc kubenswrapper[4914]: E0127 13:58:05.060367 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060375 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 13:58:05 crc kubenswrapper[4914]: E0127 13:58:05.060387 4914 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060394 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: E0127 13:58:05.060406 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="kube-rbac-proxy-node" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060414 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="kube-rbac-proxy-node" Jan 27 13:58:05 crc kubenswrapper[4914]: E0127 13:58:05.060424 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovn-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060431 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovn-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: E0127 13:58:05.060442 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="northd" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060449 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="northd" Jan 27 13:58:05 crc kubenswrapper[4914]: E0127 13:58:05.060461 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="kubecfg-setup" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060468 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="kubecfg-setup" Jan 27 13:58:05 crc kubenswrapper[4914]: E0127 13:58:05.060478 4914 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060485 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: E0127 13:58:05.060495 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovn-acl-logging" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060502 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovn-acl-logging" Jan 27 13:58:05 crc kubenswrapper[4914]: E0127 13:58:05.060510 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060517 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: E0127 13:58:05.060528 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="nbdb" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060535 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="nbdb" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060643 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="sbdb" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060658 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060667 4914 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060677 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovn-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060687 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060698 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="kube-rbac-proxy-node" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060707 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060715 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060725 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovn-acl-logging" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060734 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060743 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="nbdb" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060750 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="northd" Jan 27 13:58:05 crc kubenswrapper[4914]: E0127 13:58:05.060881 4914 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.060892 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerName="ovnkube-controller" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.062948 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096477 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1adce282-c454-4aa2-9cbe-356c7d371f98-ovn-node-metrics-cert\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096544 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-ovn\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096587 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-var-lib-openvswitch\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096607 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-run-netns\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 
13:58:05.096631 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-cni-bin\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096654 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-ovnkube-config\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096667 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096673 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-run-ovn-kubernetes\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096676 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096731 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096740 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-openvswitch\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096768 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-slash\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096772 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096792 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-node-log\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096796 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096831 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096854 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-node-log" (OuterVolumeSpecName: "node-log") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096810 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-etc-openvswitch\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096883 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-slash" (OuterVolumeSpecName: "host-slash") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096915 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-cni-netd\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096958 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-env-overrides\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.096988 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-ovnkube-script-lib\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097009 4914 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-kubelet\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097011 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097031 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-systemd\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097051 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpnbf\" (UniqueName: \"kubernetes.io/projected/1adce282-c454-4aa2-9cbe-356c7d371f98-kube-api-access-vpnbf\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097070 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-var-lib-cni-networks-ovn-kubernetes\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097093 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-systemd-units\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097114 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-log-socket\") pod \"1adce282-c454-4aa2-9cbe-356c7d371f98\" (UID: \"1adce282-c454-4aa2-9cbe-356c7d371f98\") " Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097181 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097178 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097242 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097251 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097277 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097300 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097336 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-log-socket" (OuterVolumeSpecName: "log-socket") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097496 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097706 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-run-openvswitch\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097732 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkj2d\" (UniqueName: \"kubernetes.io/projected/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-kube-api-access-rkj2d\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097810 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-ovnkube-script-lib\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097902 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-ovn-node-metrics-cert\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097961 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-run-ovn-kubernetes\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097981 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-cni-bin\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.097998 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-run-ovn\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098018 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-systemd-units\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098071 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-env-overrides\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098097 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-run-systemd\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098149 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-cni-netd\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098205 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-etc-openvswitch\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098226 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-log-socket\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098244 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-ovnkube-config\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098322 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098353 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-slash\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098422 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-var-lib-openvswitch\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098461 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-kubelet\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098481 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-run-netns\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098529 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-node-log\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098607 4914 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098635 4914 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098647 4914 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098656 4914 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098667 4914 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-ovnkube-config\") 
on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098677 4914 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098721 4914 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098732 4914 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098743 4914 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098753 4914 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098763 4914 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098774 4914 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098784 4914 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1adce282-c454-4aa2-9cbe-356c7d371f98-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098795 4914 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098806 4914 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098819 4914 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.098844 4914 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.102251 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1adce282-c454-4aa2-9cbe-356c7d371f98-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.104759 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1adce282-c454-4aa2-9cbe-356c7d371f98-kube-api-access-vpnbf" (OuterVolumeSpecName: "kube-api-access-vpnbf") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "kube-api-access-vpnbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.111581 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "1adce282-c454-4aa2-9cbe-356c7d371f98" (UID: "1adce282-c454-4aa2-9cbe-356c7d371f98"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200072 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-ovn-node-metrics-cert\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200125 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-cni-bin\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200145 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-run-ovn-kubernetes\") 
pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200169 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-run-ovn\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200191 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-systemd-units\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200213 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-env-overrides\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200240 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-run-systemd\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200260 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-cni-netd\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200285 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-etc-openvswitch\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200546 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-ovnkube-config\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200569 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-log-socket\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200601 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200637 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-slash\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200670 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-var-lib-openvswitch\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200295 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-run-systemd\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200711 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-kubelet\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200376 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-run-ovn\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200733 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-run-netns\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 
13:58:05.200757 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-slash\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200356 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-systemd-units\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200788 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-node-log\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200796 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-kubelet\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200237 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-run-ovn-kubernetes\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200812 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-log-socket\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200327 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-cni-netd\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200332 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-etc-openvswitch\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200761 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-node-log\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200885 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-var-lib-openvswitch\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200814 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-run-netns\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200762 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200898 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-run-openvswitch\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200917 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-run-openvswitch\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.200298 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-host-cni-bin\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.201039 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkj2d\" (UniqueName: \"kubernetes.io/projected/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-kube-api-access-rkj2d\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.201162 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-ovnkube-script-lib\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.201258 4914 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1adce282-c454-4aa2-9cbe-356c7d371f98-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.201280 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpnbf\" (UniqueName: \"kubernetes.io/projected/1adce282-c454-4aa2-9cbe-356c7d371f98-kube-api-access-vpnbf\") on node \"crc\" DevicePath \"\""
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.201301 4914 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1adce282-c454-4aa2-9cbe-356c7d371f98-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.202537 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-env-overrides\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.202687 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-ovnkube-script-lib\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.203024 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-ovnkube-config\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.204976 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-ovn-node-metrics-cert\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.215963 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkj2d\" (UniqueName: \"kubernetes.io/projected/ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9-kube-api-access-rkj2d\") pod \"ovnkube-node-hp29p\" (UID: \"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.377405 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:05 crc kubenswrapper[4914]: W0127 13:58:05.396225 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded5b35d5_4c0a_4a73_93bc_6d64009cd6f9.slice/crio-9bdb3afbcdbf59a90a15675b7fd1282785051aec21897156db0ec3414a53779b WatchSource:0}: Error finding container 9bdb3afbcdbf59a90a15675b7fd1282785051aec21897156db0ec3414a53779b: Status 404 returned error can't find the container with id 9bdb3afbcdbf59a90a15675b7fd1282785051aec21897156db0ec3414a53779b
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.969982 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b628_38170a87-0bc0-4c7d-b7a0-45b86a1f79e3/kube-multus/2.log"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.970096 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b628" event={"ID":"38170a87-0bc0-4c7d-b7a0-45b86a1f79e3","Type":"ContainerStarted","Data":"62592d4bfed6720609529c31a08eca8217fa009c604014bc4550126167d1c984"}
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.974515 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovn-acl-logging/0.log"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.975161 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7m5xg_1adce282-c454-4aa2-9cbe-356c7d371f98/ovn-controller/0.log"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.975531 4914 generic.go:334] "Generic (PLEG): container finished" podID="1adce282-c454-4aa2-9cbe-356c7d371f98" containerID="cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10" exitCode=0
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.975609 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10"}
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.975639 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg" event={"ID":"1adce282-c454-4aa2-9cbe-356c7d371f98","Type":"ContainerDied","Data":"44a393d239c0a7f128ed5c5ba1ba6dd4a5a5d354c49c94f07c6bb9b4c9afff82"}
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.975659 4914 scope.go:117] "RemoveContainer" containerID="4c154c6ae2cf3ee27bc31d0512ec48f46cf0ed6d8ebf30769e48987700f5d73c"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.975674 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7m5xg"
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.977743 4914 generic.go:334] "Generic (PLEG): container finished" podID="ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9" containerID="cdf9667fbee4fff2e2c1ace151d3e944cec7e3a78bd2544f6b0f8196a94c07c0" exitCode=0
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.977762 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" event={"ID":"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9","Type":"ContainerDied","Data":"cdf9667fbee4fff2e2c1ace151d3e944cec7e3a78bd2544f6b0f8196a94c07c0"}
Jan 27 13:58:05 crc kubenswrapper[4914]: I0127 13:58:05.977775 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" event={"ID":"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9","Type":"ContainerStarted","Data":"9bdb3afbcdbf59a90a15675b7fd1282785051aec21897156db0ec3414a53779b"}
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.012180 4914 scope.go:117] "RemoveContainer" containerID="cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.038205 4914 scope.go:117] "RemoveContainer" containerID="4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.068744 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7m5xg"]
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.069190 4914 scope.go:117] "RemoveContainer" containerID="009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.072326 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7m5xg"]
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.083255 4914 scope.go:117] "RemoveContainer" containerID="c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.096715 4914 scope.go:117] "RemoveContainer" containerID="60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.108037 4914 scope.go:117] "RemoveContainer" containerID="4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.122646 4914 scope.go:117] "RemoveContainer" containerID="4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.143525 4914 scope.go:117] "RemoveContainer" containerID="10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.162305 4914 scope.go:117] "RemoveContainer" containerID="4c154c6ae2cf3ee27bc31d0512ec48f46cf0ed6d8ebf30769e48987700f5d73c"
Jan 27 13:58:06 crc kubenswrapper[4914]: E0127 13:58:06.162892 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c154c6ae2cf3ee27bc31d0512ec48f46cf0ed6d8ebf30769e48987700f5d73c\": container with ID starting with 4c154c6ae2cf3ee27bc31d0512ec48f46cf0ed6d8ebf30769e48987700f5d73c not found: ID does not exist" containerID="4c154c6ae2cf3ee27bc31d0512ec48f46cf0ed6d8ebf30769e48987700f5d73c"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.162943 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c154c6ae2cf3ee27bc31d0512ec48f46cf0ed6d8ebf30769e48987700f5d73c"} err="failed to get container status \"4c154c6ae2cf3ee27bc31d0512ec48f46cf0ed6d8ebf30769e48987700f5d73c\": rpc error: code = NotFound desc = could not find container \"4c154c6ae2cf3ee27bc31d0512ec48f46cf0ed6d8ebf30769e48987700f5d73c\": container with ID starting with 4c154c6ae2cf3ee27bc31d0512ec48f46cf0ed6d8ebf30769e48987700f5d73c not found: ID does not exist"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.162976 4914 scope.go:117] "RemoveContainer" containerID="cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10"
Jan 27 13:58:06 crc kubenswrapper[4914]: E0127 13:58:06.163954 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\": container with ID starting with cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10 not found: ID does not exist" containerID="cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.164039 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10"} err="failed to get container status \"cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\": rpc error: code = NotFound desc = could not find container \"cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10\": container with ID starting with cd4ee45cd32f12a95aaa5b1cf3aaafedc6c4c4746fdf5d2b7f5672ebae7cea10 not found: ID does not exist"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.164115 4914 scope.go:117] "RemoveContainer" containerID="4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef"
Jan 27 13:58:06 crc kubenswrapper[4914]: E0127 13:58:06.164538 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\": container with ID starting with 4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef not found: ID does not exist" containerID="4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.164578 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef"} err="failed to get container status \"4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\": rpc error: code = NotFound desc = could not find container \"4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef\": container with ID starting with 4aec0fa1fc7da525793857c0eb0d67311462db629bd2acf9c966e39901c3f5ef not found: ID does not exist"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.164793 4914 scope.go:117] "RemoveContainer" containerID="009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d"
Jan 27 13:58:06 crc kubenswrapper[4914]: E0127 13:58:06.165615 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\": container with ID starting with 009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d not found: ID does not exist" containerID="009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.165669 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d"} err="failed to get container status \"009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\": rpc error: code = NotFound desc = could not find container \"009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d\": container with ID starting with 009fce1c5bfb1e11de72f9374c2fa2660cfb09fe61df3187cadbfa0245ce437d not found: ID does not exist"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.165704 4914 scope.go:117] "RemoveContainer" containerID="c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d"
Jan 27 13:58:06 crc kubenswrapper[4914]: E0127 13:58:06.166182 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\": container with ID starting with c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d not found: ID does not exist" containerID="c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.166218 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d"} err="failed to get container status \"c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\": rpc error: code = NotFound desc = could not find container \"c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d\": container with ID starting with c0fff52c4155b5f8182e9d4e7d41aa7207ddeb8e0755dbea82208f9b6d902b3d not found: ID does not exist"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.166239 4914 scope.go:117] "RemoveContainer" containerID="60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9"
Jan 27 13:58:06 crc kubenswrapper[4914]: E0127 13:58:06.166471 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\": container with ID starting with 60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9 not found: ID does not exist" containerID="60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.166494 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9"} err="failed to get container status \"60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\": rpc error: code = NotFound desc = could not find container \"60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9\": container with ID starting with 60c352a07d8ceeecad1357d4384600df87cfb59878c665a19f011c60147058e9 not found: ID does not exist"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.166512 4914 scope.go:117] "RemoveContainer" containerID="4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b"
Jan 27 13:58:06 crc kubenswrapper[4914]: E0127 13:58:06.166814 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\": container with ID starting with 4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b not found: ID does not exist" containerID="4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.166849 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b"} err="failed to get container status \"4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\": rpc error: code = NotFound desc = could not find container \"4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b\": container with ID starting with 4bc18961257d7002f03c1f248a84a9704c16f73c1f204bb3a372a800a091587b not found: ID does not exist"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.166865 4914 scope.go:117] "RemoveContainer" containerID="4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258"
Jan 27 13:58:06 crc kubenswrapper[4914]: E0127 13:58:06.167127 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\": container with ID starting with 4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258 not found: ID does not exist" containerID="4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.167149 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258"} err="failed to get container status \"4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\": rpc error: code = NotFound desc = could not find container \"4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258\": container with ID starting with 4d6de0c60a477754e85c2f8e400b4ebc1dd1decacd40c72a2e258baae26d7258 not found: ID does not exist"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.167175 4914 scope.go:117] "RemoveContainer" containerID="10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5"
Jan 27 13:58:06 crc kubenswrapper[4914]: E0127 13:58:06.167491 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\": container with ID starting with 10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5 not found: ID does not exist" containerID="10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.167523 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5"} err="failed to get container status \"10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\": rpc error: code = NotFound desc = could not find container \"10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5\": container with ID starting with 10adf782e126ff7748e98746487a844ad42c84225e318c9c75341b09ac1a3ab5 not found: ID does not exist"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.308135 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1adce282-c454-4aa2-9cbe-356c7d371f98" path="/var/lib/kubelet/pods/1adce282-c454-4aa2-9cbe-356c7d371f98/volumes"
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.986778 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" event={"ID":"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9","Type":"ContainerStarted","Data":"a9203326b69baebb5210fcfaaf9184387603ed1b39eea6f5c826bc291e277305"}
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.987242 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" event={"ID":"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9","Type":"ContainerStarted","Data":"6f71e8bfb2b9eed6e566c2ee9e39e0d70794757e4092af813b0120eb67f01321"}
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.987270 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" event={"ID":"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9","Type":"ContainerStarted","Data":"f33777125e0b8f04b693e4902e0d855ec91649f373564a4496e7d5819d807ea9"}
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.987289 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" event={"ID":"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9","Type":"ContainerStarted","Data":"d270e75c4a35ee48f5a6abfc90e0c9ad74b1c7aad5208aa27b0dcd631bef520c"}
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.987307 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" event={"ID":"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9","Type":"ContainerStarted","Data":"b6e8b2471dcfebc2daf3c5e085899eb8c5e244a572fd9f0fac28523e680a7601"}
Jan 27 13:58:06 crc kubenswrapper[4914]: I0127 13:58:06.987324 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" event={"ID":"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9","Type":"ContainerStarted","Data":"920bf19bab36010aae0a67dd8fc27b23c58abfa7571c7bc82ca6717cd13d7197"}
Jan 27 13:58:07 crc kubenswrapper[4914]: I0127 13:58:07.272198 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-9tnxd"
Jan 27 13:58:09 crc kubenswrapper[4914]: I0127 13:58:09.002166 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" event={"ID":"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9","Type":"ContainerStarted","Data":"c4bec4b2bb188528c7cb26884e700cfeefc268a4c31300dbd9c6d1f0b76752e3"}
Jan 27 13:58:12 crc kubenswrapper[4914]: I0127 13:58:12.030653 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" event={"ID":"ed5b35d5-4c0a-4a73-93bc-6d64009cd6f9","Type":"ContainerStarted","Data":"c2e2797635d7d2f5f9b10d8088181cae1bb92b1f9bb36ffa137042bceec5699c"}
Jan 27 13:58:12 crc kubenswrapper[4914]: I0127 13:58:12.031034 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:12 crc kubenswrapper[4914]: I0127 13:58:12.031050 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:12 crc kubenswrapper[4914]: I0127 13:58:12.031060 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:12 crc kubenswrapper[4914]: I0127 13:58:12.085057 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p" podStartSLOduration=7.085042373 podStartE2EDuration="7.085042373s" podCreationTimestamp="2026-01-27 13:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:58:12.081986259 +0000 UTC m=+850.394336374" watchObservedRunningTime="2026-01-27 13:58:12.085042373 +0000 UTC m=+850.397392458"
Jan 27 13:58:12 crc kubenswrapper[4914]: I0127 13:58:12.086939 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:12 crc kubenswrapper[4914]: I0127 13:58:12.087554 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:35 crc kubenswrapper[4914]: I0127 13:58:35.411076 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hp29p"
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.485115 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"]
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.486876 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.489237 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.498811 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"]
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.572137 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6sxj\" (UniqueName: \"kubernetes.io/projected/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-kube-api-access-r6sxj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4\" (UID: \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.572203 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4\" (UID: \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.572224 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4\" (UID: \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.673216 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4\" (UID: \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.673262 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4\" (UID: \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.673314 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6sxj\" (UniqueName: \"kubernetes.io/projected/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-kube-api-access-r6sxj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4\" (UID: \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.673952 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4\" (UID: \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.673977 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4\" (UID: \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.697621 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6sxj\" (UniqueName: \"kubernetes.io/projected/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-kube-api-access-r6sxj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4\" (UID: \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.807466 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"
Jan 27 13:58:48 crc kubenswrapper[4914]: I0127 13:58:48.989039 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4"]
Jan 27 13:58:49 crc kubenswrapper[4914]: I0127 13:58:49.246106 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4" event={"ID":"48d0fa7e-8e07-40a1-813d-0eee2fcf2895","Type":"ContainerStarted","Data":"8ec238bdf00ae005ef0a6482b31af3d1a018d86830e24ec12a716fedeb59d431"}
Jan 27 13:58:49 crc kubenswrapper[4914]: I0127 13:58:49.246452 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4" event={"ID":"48d0fa7e-8e07-40a1-813d-0eee2fcf2895","Type":"ContainerStarted","Data":"4d3a61a149ebe171349f817702059045c95d5bf610006c96fc12bd5e9e0599d3"}
Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.255554 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4" event={"ID":"48d0fa7e-8e07-40a1-813d-0eee2fcf2895","Type":"ContainerDied","Data":"8ec238bdf00ae005ef0a6482b31af3d1a018d86830e24ec12a716fedeb59d431"}
Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.255367 4914 generic.go:334] "Generic (PLEG): container finished" podID="48d0fa7e-8e07-40a1-813d-0eee2fcf2895" containerID="8ec238bdf00ae005ef0a6482b31af3d1a018d86830e24ec12a716fedeb59d431" exitCode=0
Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.623777 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6bpzw"]
Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.624783 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bpzw"
Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.641879 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6bpzw"]
Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.696654 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqr6c\" (UniqueName: \"kubernetes.io/projected/9b7fdb3d-4535-46d1-a938-e45895d04c56-kube-api-access-hqr6c\") pod \"redhat-operators-6bpzw\" (UID: \"9b7fdb3d-4535-46d1-a938-e45895d04c56\") " pod="openshift-marketplace/redhat-operators-6bpzw"
Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.696709 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7fdb3d-4535-46d1-a938-e45895d04c56-utilities\") pod \"redhat-operators-6bpzw\" (UID: \"9b7fdb3d-4535-46d1-a938-e45895d04c56\") " pod="openshift-marketplace/redhat-operators-6bpzw"
Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.696899 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7fdb3d-4535-46d1-a938-e45895d04c56-catalog-content\") pod \"redhat-operators-6bpzw\" (UID: \"9b7fdb3d-4535-46d1-a938-e45895d04c56\") " pod="openshift-marketplace/redhat-operators-6bpzw"
Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.798511 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqr6c\" (UniqueName: \"kubernetes.io/projected/9b7fdb3d-4535-46d1-a938-e45895d04c56-kube-api-access-hqr6c\") pod \"redhat-operators-6bpzw\" (UID: \"9b7fdb3d-4535-46d1-a938-e45895d04c56\") " pod="openshift-marketplace/redhat-operators-6bpzw"
Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.798548 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7fdb3d-4535-46d1-a938-e45895d04c56-utilities\") pod \"redhat-operators-6bpzw\" (UID: \"9b7fdb3d-4535-46d1-a938-e45895d04c56\") " pod="openshift-marketplace/redhat-operators-6bpzw"
Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.798591 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7fdb3d-4535-46d1-a938-e45895d04c56-catalog-content\") pod \"redhat-operators-6bpzw\" (UID: \"9b7fdb3d-4535-46d1-a938-e45895d04c56\") " pod="openshift-marketplace/redhat-operators-6bpzw"
Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.799052 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7fdb3d-4535-46d1-a938-e45895d04c56-utilities\") pod \"redhat-operators-6bpzw\" (UID: \"9b7fdb3d-4535-46d1-a938-e45895d04c56\") " pod="openshift-marketplace/redhat-operators-6bpzw"
Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.799078 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7fdb3d-4535-46d1-a938-e45895d04c56-catalog-content\") pod \"redhat-operators-6bpzw\" (UID: \"9b7fdb3d-4535-46d1-a938-e45895d04c56\") " pod="openshift-marketplace/redhat-operators-6bpzw" Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.825889 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqr6c\" (UniqueName: \"kubernetes.io/projected/9b7fdb3d-4535-46d1-a938-e45895d04c56-kube-api-access-hqr6c\") pod \"redhat-operators-6bpzw\" (UID: \"9b7fdb3d-4535-46d1-a938-e45895d04c56\") " pod="openshift-marketplace/redhat-operators-6bpzw" Jan 27 13:58:50 crc kubenswrapper[4914]: I0127 13:58:50.942231 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bpzw" Jan 27 13:58:51 crc kubenswrapper[4914]: I0127 13:58:51.164361 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6bpzw"] Jan 27 13:58:51 crc kubenswrapper[4914]: W0127 13:58:51.174032 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b7fdb3d_4535_46d1_a938_e45895d04c56.slice/crio-757b3fc1c9d25b31fe56fb0061aac578fb9a27ba10e72d96a417f4bf8461c74f WatchSource:0}: Error finding container 757b3fc1c9d25b31fe56fb0061aac578fb9a27ba10e72d96a417f4bf8461c74f: Status 404 returned error can't find the container with id 757b3fc1c9d25b31fe56fb0061aac578fb9a27ba10e72d96a417f4bf8461c74f Jan 27 13:58:51 crc kubenswrapper[4914]: I0127 13:58:51.264160 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bpzw" event={"ID":"9b7fdb3d-4535-46d1-a938-e45895d04c56","Type":"ContainerStarted","Data":"757b3fc1c9d25b31fe56fb0061aac578fb9a27ba10e72d96a417f4bf8461c74f"} Jan 27 13:58:52 crc kubenswrapper[4914]: I0127 13:58:52.270093 4914 generic.go:334] "Generic (PLEG): container finished" 
podID="9b7fdb3d-4535-46d1-a938-e45895d04c56" containerID="8def9061171b1b8c4e765de4cdd923ae1ae180870b1b6673d0abf78b73df498b" exitCode=0 Jan 27 13:58:52 crc kubenswrapper[4914]: I0127 13:58:52.270175 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bpzw" event={"ID":"9b7fdb3d-4535-46d1-a938-e45895d04c56","Type":"ContainerDied","Data":"8def9061171b1b8c4e765de4cdd923ae1ae180870b1b6673d0abf78b73df498b"} Jan 27 13:58:52 crc kubenswrapper[4914]: I0127 13:58:52.274909 4914 generic.go:334] "Generic (PLEG): container finished" podID="48d0fa7e-8e07-40a1-813d-0eee2fcf2895" containerID="87511a3136f2557ab1b54a91e8b2194092b55aa00de0480d01f18db31e4bf94f" exitCode=0 Jan 27 13:58:52 crc kubenswrapper[4914]: I0127 13:58:52.274940 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4" event={"ID":"48d0fa7e-8e07-40a1-813d-0eee2fcf2895","Type":"ContainerDied","Data":"87511a3136f2557ab1b54a91e8b2194092b55aa00de0480d01f18db31e4bf94f"} Jan 27 13:58:53 crc kubenswrapper[4914]: I0127 13:58:53.285731 4914 generic.go:334] "Generic (PLEG): container finished" podID="48d0fa7e-8e07-40a1-813d-0eee2fcf2895" containerID="2ddf1b0e5e97300d4a3c85d43dd3a6efcedb554c13ee2c8cf15bda3a857a6cf0" exitCode=0 Jan 27 13:58:53 crc kubenswrapper[4914]: I0127 13:58:53.285881 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4" event={"ID":"48d0fa7e-8e07-40a1-813d-0eee2fcf2895","Type":"ContainerDied","Data":"2ddf1b0e5e97300d4a3c85d43dd3a6efcedb554c13ee2c8cf15bda3a857a6cf0"} Jan 27 13:58:53 crc kubenswrapper[4914]: I0127 13:58:53.288616 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bpzw" 
event={"ID":"9b7fdb3d-4535-46d1-a938-e45895d04c56","Type":"ContainerStarted","Data":"7b8fd3e27e467c34640aef042181e19dd5110e710921785a9a76046d98ad28d1"} Jan 27 13:58:54 crc kubenswrapper[4914]: I0127 13:58:54.294804 4914 generic.go:334] "Generic (PLEG): container finished" podID="9b7fdb3d-4535-46d1-a938-e45895d04c56" containerID="7b8fd3e27e467c34640aef042181e19dd5110e710921785a9a76046d98ad28d1" exitCode=0 Jan 27 13:58:54 crc kubenswrapper[4914]: I0127 13:58:54.304256 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bpzw" event={"ID":"9b7fdb3d-4535-46d1-a938-e45895d04c56","Type":"ContainerDied","Data":"7b8fd3e27e467c34640aef042181e19dd5110e710921785a9a76046d98ad28d1"} Jan 27 13:58:54 crc kubenswrapper[4914]: I0127 13:58:54.531865 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4" Jan 27 13:58:54 crc kubenswrapper[4914]: I0127 13:58:54.645130 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-util\") pod \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\" (UID: \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\") " Jan 27 13:58:54 crc kubenswrapper[4914]: I0127 13:58:54.645194 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-bundle\") pod \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\" (UID: \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\") " Jan 27 13:58:54 crc kubenswrapper[4914]: I0127 13:58:54.645282 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6sxj\" (UniqueName: \"kubernetes.io/projected/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-kube-api-access-r6sxj\") pod \"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\" (UID: 
\"48d0fa7e-8e07-40a1-813d-0eee2fcf2895\") " Jan 27 13:58:54 crc kubenswrapper[4914]: I0127 13:58:54.646680 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-bundle" (OuterVolumeSpecName: "bundle") pod "48d0fa7e-8e07-40a1-813d-0eee2fcf2895" (UID: "48d0fa7e-8e07-40a1-813d-0eee2fcf2895"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:58:54 crc kubenswrapper[4914]: I0127 13:58:54.651039 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-kube-api-access-r6sxj" (OuterVolumeSpecName: "kube-api-access-r6sxj") pod "48d0fa7e-8e07-40a1-813d-0eee2fcf2895" (UID: "48d0fa7e-8e07-40a1-813d-0eee2fcf2895"). InnerVolumeSpecName "kube-api-access-r6sxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:58:54 crc kubenswrapper[4914]: I0127 13:58:54.714877 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-util" (OuterVolumeSpecName: "util") pod "48d0fa7e-8e07-40a1-813d-0eee2fcf2895" (UID: "48d0fa7e-8e07-40a1-813d-0eee2fcf2895"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:58:54 crc kubenswrapper[4914]: I0127 13:58:54.747628 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6sxj\" (UniqueName: \"kubernetes.io/projected/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-kube-api-access-r6sxj\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:54 crc kubenswrapper[4914]: I0127 13:58:54.747667 4914 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-util\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:54 crc kubenswrapper[4914]: I0127 13:58:54.747676 4914 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48d0fa7e-8e07-40a1-813d-0eee2fcf2895-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:58:55 crc kubenswrapper[4914]: I0127 13:58:55.302795 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bpzw" event={"ID":"9b7fdb3d-4535-46d1-a938-e45895d04c56","Type":"ContainerStarted","Data":"47874d160c7e7befa67f99b6a61af51f6f427ea9177d17ce73234679c4951ded"} Jan 27 13:58:55 crc kubenswrapper[4914]: I0127 13:58:55.305600 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4" event={"ID":"48d0fa7e-8e07-40a1-813d-0eee2fcf2895","Type":"ContainerDied","Data":"4d3a61a149ebe171349f817702059045c95d5bf610006c96fc12bd5e9e0599d3"} Jan 27 13:58:55 crc kubenswrapper[4914]: I0127 13:58:55.305643 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d3a61a149ebe171349f817702059045c95d5bf610006c96fc12bd5e9e0599d3" Jan 27 13:58:55 crc kubenswrapper[4914]: I0127 13:58:55.305654 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4" Jan 27 13:58:55 crc kubenswrapper[4914]: I0127 13:58:55.323242 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6bpzw" podStartSLOduration=2.569686581 podStartE2EDuration="5.323220084s" podCreationTimestamp="2026-01-27 13:58:50 +0000 UTC" firstStartedPulling="2026-01-27 13:58:52.27164974 +0000 UTC m=+890.583999825" lastFinishedPulling="2026-01-27 13:58:55.025183243 +0000 UTC m=+893.337533328" observedRunningTime="2026-01-27 13:58:55.32231777 +0000 UTC m=+893.634667855" watchObservedRunningTime="2026-01-27 13:58:55.323220084 +0000 UTC m=+893.635570169" Jan 27 13:58:59 crc kubenswrapper[4914]: I0127 13:58:59.766610 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-g7ck7"] Jan 27 13:58:59 crc kubenswrapper[4914]: E0127 13:58:59.767434 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d0fa7e-8e07-40a1-813d-0eee2fcf2895" containerName="extract" Jan 27 13:58:59 crc kubenswrapper[4914]: I0127 13:58:59.767451 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d0fa7e-8e07-40a1-813d-0eee2fcf2895" containerName="extract" Jan 27 13:58:59 crc kubenswrapper[4914]: E0127 13:58:59.767467 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d0fa7e-8e07-40a1-813d-0eee2fcf2895" containerName="pull" Jan 27 13:58:59 crc kubenswrapper[4914]: I0127 13:58:59.767473 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d0fa7e-8e07-40a1-813d-0eee2fcf2895" containerName="pull" Jan 27 13:58:59 crc kubenswrapper[4914]: E0127 13:58:59.767485 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d0fa7e-8e07-40a1-813d-0eee2fcf2895" containerName="util" Jan 27 13:58:59 crc kubenswrapper[4914]: I0127 13:58:59.767491 4914 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="48d0fa7e-8e07-40a1-813d-0eee2fcf2895" containerName="util" Jan 27 13:58:59 crc kubenswrapper[4914]: I0127 13:58:59.767585 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d0fa7e-8e07-40a1-813d-0eee2fcf2895" containerName="extract" Jan 27 13:58:59 crc kubenswrapper[4914]: I0127 13:58:59.768079 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-g7ck7" Jan 27 13:58:59 crc kubenswrapper[4914]: I0127 13:58:59.770186 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-47x5t" Jan 27 13:58:59 crc kubenswrapper[4914]: I0127 13:58:59.770588 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 13:58:59 crc kubenswrapper[4914]: I0127 13:58:59.770698 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 13:58:59 crc kubenswrapper[4914]: I0127 13:58:59.780510 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-g7ck7"] Jan 27 13:58:59 crc kubenswrapper[4914]: I0127 13:58:59.812448 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbvd7\" (UniqueName: \"kubernetes.io/projected/e3d282b5-2dc5-4c0b-9a8f-aace2048b049-kube-api-access-rbvd7\") pod \"nmstate-operator-646758c888-g7ck7\" (UID: \"e3d282b5-2dc5-4c0b-9a8f-aace2048b049\") " pod="openshift-nmstate/nmstate-operator-646758c888-g7ck7" Jan 27 13:58:59 crc kubenswrapper[4914]: I0127 13:58:59.913619 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbvd7\" (UniqueName: \"kubernetes.io/projected/e3d282b5-2dc5-4c0b-9a8f-aace2048b049-kube-api-access-rbvd7\") pod \"nmstate-operator-646758c888-g7ck7\" (UID: \"e3d282b5-2dc5-4c0b-9a8f-aace2048b049\") " 
pod="openshift-nmstate/nmstate-operator-646758c888-g7ck7" Jan 27 13:58:59 crc kubenswrapper[4914]: I0127 13:58:59.931398 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbvd7\" (UniqueName: \"kubernetes.io/projected/e3d282b5-2dc5-4c0b-9a8f-aace2048b049-kube-api-access-rbvd7\") pod \"nmstate-operator-646758c888-g7ck7\" (UID: \"e3d282b5-2dc5-4c0b-9a8f-aace2048b049\") " pod="openshift-nmstate/nmstate-operator-646758c888-g7ck7" Jan 27 13:59:00 crc kubenswrapper[4914]: I0127 13:59:00.084740 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-g7ck7" Jan 27 13:59:00 crc kubenswrapper[4914]: I0127 13:59:00.328578 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-g7ck7"] Jan 27 13:59:00 crc kubenswrapper[4914]: I0127 13:59:00.943371 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6bpzw" Jan 27 13:59:00 crc kubenswrapper[4914]: I0127 13:59:00.943421 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6bpzw" Jan 27 13:59:00 crc kubenswrapper[4914]: I0127 13:59:00.983622 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6bpzw" Jan 27 13:59:01 crc kubenswrapper[4914]: I0127 13:59:01.337008 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-g7ck7" event={"ID":"e3d282b5-2dc5-4c0b-9a8f-aace2048b049","Type":"ContainerStarted","Data":"335909bb5e818c8a619cf6743af957a3aff80fb6eaf77ad8f7f6f2debc384c2c"} Jan 27 13:59:01 crc kubenswrapper[4914]: I0127 13:59:01.377092 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6bpzw" Jan 27 13:59:03 crc kubenswrapper[4914]: I0127 13:59:03.419209 4914 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6bpzw"] Jan 27 13:59:03 crc kubenswrapper[4914]: I0127 13:59:03.419748 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6bpzw" podUID="9b7fdb3d-4535-46d1-a938-e45895d04c56" containerName="registry-server" containerID="cri-o://47874d160c7e7befa67f99b6a61af51f6f427ea9177d17ce73234679c4951ded" gracePeriod=2 Jan 27 13:59:03 crc kubenswrapper[4914]: I0127 13:59:03.804191 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bpzw" Jan 27 13:59:03 crc kubenswrapper[4914]: I0127 13:59:03.876006 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7fdb3d-4535-46d1-a938-e45895d04c56-catalog-content\") pod \"9b7fdb3d-4535-46d1-a938-e45895d04c56\" (UID: \"9b7fdb3d-4535-46d1-a938-e45895d04c56\") " Jan 27 13:59:03 crc kubenswrapper[4914]: I0127 13:59:03.876055 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7fdb3d-4535-46d1-a938-e45895d04c56-utilities\") pod \"9b7fdb3d-4535-46d1-a938-e45895d04c56\" (UID: \"9b7fdb3d-4535-46d1-a938-e45895d04c56\") " Jan 27 13:59:03 crc kubenswrapper[4914]: I0127 13:59:03.876091 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqr6c\" (UniqueName: \"kubernetes.io/projected/9b7fdb3d-4535-46d1-a938-e45895d04c56-kube-api-access-hqr6c\") pod \"9b7fdb3d-4535-46d1-a938-e45895d04c56\" (UID: \"9b7fdb3d-4535-46d1-a938-e45895d04c56\") " Jan 27 13:59:03 crc kubenswrapper[4914]: I0127 13:59:03.878681 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b7fdb3d-4535-46d1-a938-e45895d04c56-utilities" (OuterVolumeSpecName: "utilities") pod 
"9b7fdb3d-4535-46d1-a938-e45895d04c56" (UID: "9b7fdb3d-4535-46d1-a938-e45895d04c56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:59:03 crc kubenswrapper[4914]: I0127 13:59:03.881256 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b7fdb3d-4535-46d1-a938-e45895d04c56-kube-api-access-hqr6c" (OuterVolumeSpecName: "kube-api-access-hqr6c") pod "9b7fdb3d-4535-46d1-a938-e45895d04c56" (UID: "9b7fdb3d-4535-46d1-a938-e45895d04c56"). InnerVolumeSpecName "kube-api-access-hqr6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:59:03 crc kubenswrapper[4914]: I0127 13:59:03.977437 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqr6c\" (UniqueName: \"kubernetes.io/projected/9b7fdb3d-4535-46d1-a938-e45895d04c56-kube-api-access-hqr6c\") on node \"crc\" DevicePath \"\"" Jan 27 13:59:03 crc kubenswrapper[4914]: I0127 13:59:03.977474 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7fdb3d-4535-46d1-a938-e45895d04c56-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.005933 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b7fdb3d-4535-46d1-a938-e45895d04c56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b7fdb3d-4535-46d1-a938-e45895d04c56" (UID: "9b7fdb3d-4535-46d1-a938-e45895d04c56"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.078903 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7fdb3d-4535-46d1-a938-e45895d04c56-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.375666 4914 generic.go:334] "Generic (PLEG): container finished" podID="9b7fdb3d-4535-46d1-a938-e45895d04c56" containerID="47874d160c7e7befa67f99b6a61af51f6f427ea9177d17ce73234679c4951ded" exitCode=0 Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.375721 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bpzw" Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.375731 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bpzw" event={"ID":"9b7fdb3d-4535-46d1-a938-e45895d04c56","Type":"ContainerDied","Data":"47874d160c7e7befa67f99b6a61af51f6f427ea9177d17ce73234679c4951ded"} Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.375765 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bpzw" event={"ID":"9b7fdb3d-4535-46d1-a938-e45895d04c56","Type":"ContainerDied","Data":"757b3fc1c9d25b31fe56fb0061aac578fb9a27ba10e72d96a417f4bf8461c74f"} Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.375810 4914 scope.go:117] "RemoveContainer" containerID="47874d160c7e7befa67f99b6a61af51f6f427ea9177d17ce73234679c4951ded" Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.394442 4914 scope.go:117] "RemoveContainer" containerID="7b8fd3e27e467c34640aef042181e19dd5110e710921785a9a76046d98ad28d1" Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.395059 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6bpzw"] Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 
13:59:04.398800 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6bpzw"] Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.408383 4914 scope.go:117] "RemoveContainer" containerID="8def9061171b1b8c4e765de4cdd923ae1ae180870b1b6673d0abf78b73df498b" Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.425239 4914 scope.go:117] "RemoveContainer" containerID="47874d160c7e7befa67f99b6a61af51f6f427ea9177d17ce73234679c4951ded" Jan 27 13:59:04 crc kubenswrapper[4914]: E0127 13:59:04.425654 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47874d160c7e7befa67f99b6a61af51f6f427ea9177d17ce73234679c4951ded\": container with ID starting with 47874d160c7e7befa67f99b6a61af51f6f427ea9177d17ce73234679c4951ded not found: ID does not exist" containerID="47874d160c7e7befa67f99b6a61af51f6f427ea9177d17ce73234679c4951ded" Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.425689 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47874d160c7e7befa67f99b6a61af51f6f427ea9177d17ce73234679c4951ded"} err="failed to get container status \"47874d160c7e7befa67f99b6a61af51f6f427ea9177d17ce73234679c4951ded\": rpc error: code = NotFound desc = could not find container \"47874d160c7e7befa67f99b6a61af51f6f427ea9177d17ce73234679c4951ded\": container with ID starting with 47874d160c7e7befa67f99b6a61af51f6f427ea9177d17ce73234679c4951ded not found: ID does not exist" Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.425714 4914 scope.go:117] "RemoveContainer" containerID="7b8fd3e27e467c34640aef042181e19dd5110e710921785a9a76046d98ad28d1" Jan 27 13:59:04 crc kubenswrapper[4914]: E0127 13:59:04.425959 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b8fd3e27e467c34640aef042181e19dd5110e710921785a9a76046d98ad28d1\": container with ID 
starting with 7b8fd3e27e467c34640aef042181e19dd5110e710921785a9a76046d98ad28d1 not found: ID does not exist" containerID="7b8fd3e27e467c34640aef042181e19dd5110e710921785a9a76046d98ad28d1" Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.425986 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b8fd3e27e467c34640aef042181e19dd5110e710921785a9a76046d98ad28d1"} err="failed to get container status \"7b8fd3e27e467c34640aef042181e19dd5110e710921785a9a76046d98ad28d1\": rpc error: code = NotFound desc = could not find container \"7b8fd3e27e467c34640aef042181e19dd5110e710921785a9a76046d98ad28d1\": container with ID starting with 7b8fd3e27e467c34640aef042181e19dd5110e710921785a9a76046d98ad28d1 not found: ID does not exist" Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.426005 4914 scope.go:117] "RemoveContainer" containerID="8def9061171b1b8c4e765de4cdd923ae1ae180870b1b6673d0abf78b73df498b" Jan 27 13:59:04 crc kubenswrapper[4914]: E0127 13:59:04.426189 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8def9061171b1b8c4e765de4cdd923ae1ae180870b1b6673d0abf78b73df498b\": container with ID starting with 8def9061171b1b8c4e765de4cdd923ae1ae180870b1b6673d0abf78b73df498b not found: ID does not exist" containerID="8def9061171b1b8c4e765de4cdd923ae1ae180870b1b6673d0abf78b73df498b" Jan 27 13:59:04 crc kubenswrapper[4914]: I0127 13:59:04.426208 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8def9061171b1b8c4e765de4cdd923ae1ae180870b1b6673d0abf78b73df498b"} err="failed to get container status \"8def9061171b1b8c4e765de4cdd923ae1ae180870b1b6673d0abf78b73df498b\": rpc error: code = NotFound desc = could not find container \"8def9061171b1b8c4e765de4cdd923ae1ae180870b1b6673d0abf78b73df498b\": container with ID starting with 8def9061171b1b8c4e765de4cdd923ae1ae180870b1b6673d0abf78b73df498b not found: 
ID does not exist" Jan 27 13:59:06 crc kubenswrapper[4914]: I0127 13:59:06.303551 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b7fdb3d-4535-46d1-a938-e45895d04c56" path="/var/lib/kubelet/pods/9b7fdb3d-4535-46d1-a938-e45895d04c56/volumes" Jan 27 13:59:07 crc kubenswrapper[4914]: I0127 13:59:07.690764 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:59:07 crc kubenswrapper[4914]: I0127 13:59:07.691214 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:59:08 crc kubenswrapper[4914]: I0127 13:59:08.398990 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-g7ck7" event={"ID":"e3d282b5-2dc5-4c0b-9a8f-aace2048b049","Type":"ContainerStarted","Data":"69a3745e8b3d9bfd1aed094083f1dd0c6f36f0df451ec638b9c0742ab361234e"} Jan 27 13:59:08 crc kubenswrapper[4914]: I0127 13:59:08.423346 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-g7ck7" podStartSLOduration=1.880184217 podStartE2EDuration="9.423324926s" podCreationTimestamp="2026-01-27 13:58:59 +0000 UTC" firstStartedPulling="2026-01-27 13:59:00.341891238 +0000 UTC m=+898.654241323" lastFinishedPulling="2026-01-27 13:59:07.885031947 +0000 UTC m=+906.197382032" observedRunningTime="2026-01-27 13:59:08.41803311 +0000 UTC m=+906.730383195" watchObservedRunningTime="2026-01-27 13:59:08.423324926 +0000 UTC m=+906.735675021" Jan 27 13:59:09 crc 
kubenswrapper[4914]: I0127 13:59:09.421392 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-5ltgd"] Jan 27 13:59:09 crc kubenswrapper[4914]: E0127 13:59:09.421688 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7fdb3d-4535-46d1-a938-e45895d04c56" containerName="extract-utilities" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.421707 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7fdb3d-4535-46d1-a938-e45895d04c56" containerName="extract-utilities" Jan 27 13:59:09 crc kubenswrapper[4914]: E0127 13:59:09.421726 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7fdb3d-4535-46d1-a938-e45895d04c56" containerName="extract-content" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.421737 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7fdb3d-4535-46d1-a938-e45895d04c56" containerName="extract-content" Jan 27 13:59:09 crc kubenswrapper[4914]: E0127 13:59:09.421751 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7fdb3d-4535-46d1-a938-e45895d04c56" containerName="registry-server" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.421762 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7fdb3d-4535-46d1-a938-e45895d04c56" containerName="registry-server" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.421937 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7fdb3d-4535-46d1-a938-e45895d04c56" containerName="registry-server" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.422854 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-5ltgd" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.425621 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4"] Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.425995 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bk298" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.426275 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.431308 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.446157 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-5ltgd"] Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.458793 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cc90236f-8757-4fbb-89a8-b79c69e688e3-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7wff4\" (UID: \"cc90236f-8757-4fbb-89a8-b79c69e688e3\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.459264 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbnxp\" (UniqueName: \"kubernetes.io/projected/cc90236f-8757-4fbb-89a8-b79c69e688e3-kube-api-access-sbnxp\") pod \"nmstate-webhook-8474b5b9d8-7wff4\" (UID: \"cc90236f-8757-4fbb-89a8-b79c69e688e3\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.459284 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-pq8z9\" (UniqueName: \"kubernetes.io/projected/497bf40d-5a3c-48a5-90b5-e2bc0566f520-kube-api-access-pq8z9\") pod \"nmstate-metrics-54757c584b-5ltgd\" (UID: \"497bf40d-5a3c-48a5-90b5-e2bc0566f520\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-5ltgd" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.467871 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-sknr4"] Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.468705 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.474494 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4"] Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.557664 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w"] Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.558442 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.560887 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cc90236f-8757-4fbb-89a8-b79c69e688e3-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7wff4\" (UID: \"cc90236f-8757-4fbb-89a8-b79c69e688e3\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.560942 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mn5m\" (UniqueName: \"kubernetes.io/projected/263a8102-46cc-45f5-b0b9-9d20f072147d-kube-api-access-7mn5m\") pod \"nmstate-handler-sknr4\" (UID: \"263a8102-46cc-45f5-b0b9-9d20f072147d\") " pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.560999 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/263a8102-46cc-45f5-b0b9-9d20f072147d-dbus-socket\") pod \"nmstate-handler-sknr4\" (UID: \"263a8102-46cc-45f5-b0b9-9d20f072147d\") " pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.561025 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/263a8102-46cc-45f5-b0b9-9d20f072147d-ovs-socket\") pod \"nmstate-handler-sknr4\" (UID: \"263a8102-46cc-45f5-b0b9-9d20f072147d\") " pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.561047 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbnxp\" (UniqueName: \"kubernetes.io/projected/cc90236f-8757-4fbb-89a8-b79c69e688e3-kube-api-access-sbnxp\") pod 
\"nmstate-webhook-8474b5b9d8-7wff4\" (UID: \"cc90236f-8757-4fbb-89a8-b79c69e688e3\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.561064 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq8z9\" (UniqueName: \"kubernetes.io/projected/497bf40d-5a3c-48a5-90b5-e2bc0566f520-kube-api-access-pq8z9\") pod \"nmstate-metrics-54757c584b-5ltgd\" (UID: \"497bf40d-5a3c-48a5-90b5-e2bc0566f520\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-5ltgd" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.561096 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/263a8102-46cc-45f5-b0b9-9d20f072147d-nmstate-lock\") pod \"nmstate-handler-sknr4\" (UID: \"263a8102-46cc-45f5-b0b9-9d20f072147d\") " pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.561802 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.562154 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.562323 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-fxslq" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.575043 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w"] Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.585795 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cc90236f-8757-4fbb-89a8-b79c69e688e3-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7wff4\" (UID: \"cc90236f-8757-4fbb-89a8-b79c69e688e3\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.586927 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq8z9\" (UniqueName: \"kubernetes.io/projected/497bf40d-5a3c-48a5-90b5-e2bc0566f520-kube-api-access-pq8z9\") pod \"nmstate-metrics-54757c584b-5ltgd\" (UID: \"497bf40d-5a3c-48a5-90b5-e2bc0566f520\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-5ltgd" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.591621 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbnxp\" (UniqueName: \"kubernetes.io/projected/cc90236f-8757-4fbb-89a8-b79c69e688e3-kube-api-access-sbnxp\") pod \"nmstate-webhook-8474b5b9d8-7wff4\" (UID: \"cc90236f-8757-4fbb-89a8-b79c69e688e3\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.662224 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/263a8102-46cc-45f5-b0b9-9d20f072147d-dbus-socket\") pod \"nmstate-handler-sknr4\" (UID: \"263a8102-46cc-45f5-b0b9-9d20f072147d\") " pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.662287 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/263a8102-46cc-45f5-b0b9-9d20f072147d-ovs-socket\") pod \"nmstate-handler-sknr4\" (UID: \"263a8102-46cc-45f5-b0b9-9d20f072147d\") " pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.662336 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b29h5\" (UniqueName: \"kubernetes.io/projected/32db9757-0171-406d-807a-103144e273ac-kube-api-access-b29h5\") pod \"nmstate-console-plugin-7754f76f8b-zmn2w\" (UID: 
\"32db9757-0171-406d-807a-103144e273ac\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.662359 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/32db9757-0171-406d-807a-103144e273ac-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zmn2w\" (UID: \"32db9757-0171-406d-807a-103144e273ac\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.662382 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/263a8102-46cc-45f5-b0b9-9d20f072147d-nmstate-lock\") pod \"nmstate-handler-sknr4\" (UID: \"263a8102-46cc-45f5-b0b9-9d20f072147d\") " pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.662428 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mn5m\" (UniqueName: \"kubernetes.io/projected/263a8102-46cc-45f5-b0b9-9d20f072147d-kube-api-access-7mn5m\") pod \"nmstate-handler-sknr4\" (UID: \"263a8102-46cc-45f5-b0b9-9d20f072147d\") " pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.662454 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/32db9757-0171-406d-807a-103144e273ac-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zmn2w\" (UID: \"32db9757-0171-406d-807a-103144e273ac\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.662744 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/263a8102-46cc-45f5-b0b9-9d20f072147d-ovs-socket\") 
pod \"nmstate-handler-sknr4\" (UID: \"263a8102-46cc-45f5-b0b9-9d20f072147d\") " pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.662856 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/263a8102-46cc-45f5-b0b9-9d20f072147d-nmstate-lock\") pod \"nmstate-handler-sknr4\" (UID: \"263a8102-46cc-45f5-b0b9-9d20f072147d\") " pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.662802 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/263a8102-46cc-45f5-b0b9-9d20f072147d-dbus-socket\") pod \"nmstate-handler-sknr4\" (UID: \"263a8102-46cc-45f5-b0b9-9d20f072147d\") " pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.684521 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mn5m\" (UniqueName: \"kubernetes.io/projected/263a8102-46cc-45f5-b0b9-9d20f072147d-kube-api-access-7mn5m\") pod \"nmstate-handler-sknr4\" (UID: \"263a8102-46cc-45f5-b0b9-9d20f072147d\") " pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.747590 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-5ltgd" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.764107 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b29h5\" (UniqueName: \"kubernetes.io/projected/32db9757-0171-406d-807a-103144e273ac-kube-api-access-b29h5\") pod \"nmstate-console-plugin-7754f76f8b-zmn2w\" (UID: \"32db9757-0171-406d-807a-103144e273ac\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.764164 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/32db9757-0171-406d-807a-103144e273ac-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zmn2w\" (UID: \"32db9757-0171-406d-807a-103144e273ac\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.764217 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/32db9757-0171-406d-807a-103144e273ac-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zmn2w\" (UID: \"32db9757-0171-406d-807a-103144e273ac\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.765213 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/32db9757-0171-406d-807a-103144e273ac-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zmn2w\" (UID: \"32db9757-0171-406d-807a-103144e273ac\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.766142 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6fdccd656d-ch7mf"] Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.767043 4914 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.770037 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.770980 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/32db9757-0171-406d-807a-103144e273ac-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zmn2w\" (UID: \"32db9757-0171-406d-807a-103144e273ac\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.788257 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fdccd656d-ch7mf"] Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.788912 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b29h5\" (UniqueName: \"kubernetes.io/projected/32db9757-0171-406d-807a-103144e273ac-kube-api-access-b29h5\") pod \"nmstate-console-plugin-7754f76f8b-zmn2w\" (UID: \"32db9757-0171-406d-807a-103144e273ac\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.791049 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:09 crc kubenswrapper[4914]: W0127 13:59:09.830740 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod263a8102_46cc_45f5_b0b9_9d20f072147d.slice/crio-d10f8891c74f13867551a67b3c6c310b34b134ec083217e473afec0774157373 WatchSource:0}: Error finding container d10f8891c74f13867551a67b3c6c310b34b134ec083217e473afec0774157373: Status 404 returned error can't find the container with id d10f8891c74f13867551a67b3c6c310b34b134ec083217e473afec0774157373 Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.865667 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-console-oauth-config\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.865975 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-oauth-serving-cert\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.866007 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-console-serving-cert\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.866027 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl7s6\" (UniqueName: \"kubernetes.io/projected/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-kube-api-access-tl7s6\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.866051 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-trusted-ca-bundle\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.866085 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-console-config\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.866102 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-service-ca\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.934543 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.967620 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl7s6\" (UniqueName: \"kubernetes.io/projected/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-kube-api-access-tl7s6\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.967689 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-trusted-ca-bundle\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.967735 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-service-ca\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.967758 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-console-config\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.967858 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-console-oauth-config\") pod \"console-6fdccd656d-ch7mf\" (UID: 
\"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.967887 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-oauth-serving-cert\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.967919 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-console-serving-cert\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.968664 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-console-config\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.968854 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-service-ca\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.968883 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-oauth-serving-cert\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " 
pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.969229 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-trusted-ca-bundle\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.974363 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-console-oauth-config\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.974843 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-console-serving-cert\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:09 crc kubenswrapper[4914]: I0127 13:59:09.985583 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl7s6\" (UniqueName: \"kubernetes.io/projected/d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa-kube-api-access-tl7s6\") pod \"console-6fdccd656d-ch7mf\" (UID: \"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa\") " pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:10 crc kubenswrapper[4914]: I0127 13:59:10.012137 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-5ltgd"] Jan 27 13:59:10 crc kubenswrapper[4914]: W0127 13:59:10.025673 4914 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod497bf40d_5a3c_48a5_90b5_e2bc0566f520.slice/crio-e3e204ed20975abb954f16ae9d9ef7af15c2b29b2ecc7a5c5e4eaf66ca42f1bd WatchSource:0}: Error finding container e3e204ed20975abb954f16ae9d9ef7af15c2b29b2ecc7a5c5e4eaf66ca42f1bd: Status 404 returned error can't find the container with id e3e204ed20975abb954f16ae9d9ef7af15c2b29b2ecc7a5c5e4eaf66ca42f1bd Jan 27 13:59:10 crc kubenswrapper[4914]: I0127 13:59:10.097952 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:10 crc kubenswrapper[4914]: I0127 13:59:10.125122 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w"] Jan 27 13:59:10 crc kubenswrapper[4914]: W0127 13:59:10.128311 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32db9757_0171_406d_807a_103144e273ac.slice/crio-69b8f6920ca593d0cc31ef227512d58b9dead10e67070522b33aca47ecb348a5 WatchSource:0}: Error finding container 69b8f6920ca593d0cc31ef227512d58b9dead10e67070522b33aca47ecb348a5: Status 404 returned error can't find the container with id 69b8f6920ca593d0cc31ef227512d58b9dead10e67070522b33aca47ecb348a5 Jan 27 13:59:10 crc kubenswrapper[4914]: I0127 13:59:10.165222 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4"] Jan 27 13:59:10 crc kubenswrapper[4914]: I0127 13:59:10.292855 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fdccd656d-ch7mf"] Jan 27 13:59:10 crc kubenswrapper[4914]: W0127 13:59:10.302944 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9515b7b_e95b_4ebf_b1a7_8158dcdca4fa.slice/crio-39279d19bfd5347f380899b4b52a59a92642b77b53fea4ef2db5e30c292caaa5 WatchSource:0}: Error 
finding container 39279d19bfd5347f380899b4b52a59a92642b77b53fea4ef2db5e30c292caaa5: Status 404 returned error can't find the container with id 39279d19bfd5347f380899b4b52a59a92642b77b53fea4ef2db5e30c292caaa5 Jan 27 13:59:10 crc kubenswrapper[4914]: I0127 13:59:10.445903 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fdccd656d-ch7mf" event={"ID":"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa","Type":"ContainerStarted","Data":"8c3b68d9a081110d59061676dd3fc8d1a3ac3b3f5a65d86551ccacb6a7d12198"} Jan 27 13:59:10 crc kubenswrapper[4914]: I0127 13:59:10.445990 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fdccd656d-ch7mf" event={"ID":"d9515b7b-e95b-4ebf-b1a7-8158dcdca4fa","Type":"ContainerStarted","Data":"39279d19bfd5347f380899b4b52a59a92642b77b53fea4ef2db5e30c292caaa5"} Jan 27 13:59:10 crc kubenswrapper[4914]: I0127 13:59:10.447266 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" event={"ID":"32db9757-0171-406d-807a-103144e273ac","Type":"ContainerStarted","Data":"69b8f6920ca593d0cc31ef227512d58b9dead10e67070522b33aca47ecb348a5"} Jan 27 13:59:10 crc kubenswrapper[4914]: I0127 13:59:10.448142 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sknr4" event={"ID":"263a8102-46cc-45f5-b0b9-9d20f072147d","Type":"ContainerStarted","Data":"d10f8891c74f13867551a67b3c6c310b34b134ec083217e473afec0774157373"} Jan 27 13:59:10 crc kubenswrapper[4914]: I0127 13:59:10.449274 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4" event={"ID":"cc90236f-8757-4fbb-89a8-b79c69e688e3","Type":"ContainerStarted","Data":"919dfa1dadb2c29090a086e2c02d24bc0e9cfef4447c01f6c2f0fc81d8946c60"} Jan 27 13:59:10 crc kubenswrapper[4914]: I0127 13:59:10.450469 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-5ltgd" 
event={"ID":"497bf40d-5a3c-48a5-90b5-e2bc0566f520","Type":"ContainerStarted","Data":"e3e204ed20975abb954f16ae9d9ef7af15c2b29b2ecc7a5c5e4eaf66ca42f1bd"} Jan 27 13:59:10 crc kubenswrapper[4914]: I0127 13:59:10.466361 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6fdccd656d-ch7mf" podStartSLOduration=1.466340229 podStartE2EDuration="1.466340229s" podCreationTimestamp="2026-01-27 13:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:59:10.461156419 +0000 UTC m=+908.773506514" watchObservedRunningTime="2026-01-27 13:59:10.466340229 +0000 UTC m=+908.778690304" Jan 27 13:59:13 crc kubenswrapper[4914]: I0127 13:59:13.468597 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-5ltgd" event={"ID":"497bf40d-5a3c-48a5-90b5-e2bc0566f520","Type":"ContainerStarted","Data":"862ca3e436ca79b07f8e20771760ab310f3d3d2eb77d6e12bbdc717f769f3fdf"} Jan 27 13:59:13 crc kubenswrapper[4914]: I0127 13:59:13.469874 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" event={"ID":"32db9757-0171-406d-807a-103144e273ac","Type":"ContainerStarted","Data":"17c822d9a9f5fece37f7da2f324ee6fda027d0712477b3ccd9abd77455e231b9"} Jan 27 13:59:13 crc kubenswrapper[4914]: I0127 13:59:13.471878 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sknr4" event={"ID":"263a8102-46cc-45f5-b0b9-9d20f072147d","Type":"ContainerStarted","Data":"52bf0fff31ccfe905da289bcc418325d857464b6b982bc4b8e709983555e3cbb"} Jan 27 13:59:13 crc kubenswrapper[4914]: I0127 13:59:13.471974 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:13 crc kubenswrapper[4914]: I0127 13:59:13.475041 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4" event={"ID":"cc90236f-8757-4fbb-89a8-b79c69e688e3","Type":"ContainerStarted","Data":"a6446d574ff1a19f4e0aec997796fbd0413a211ab91b513ea29111b6c1b2af1b"} Jan 27 13:59:13 crc kubenswrapper[4914]: I0127 13:59:13.475192 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4" Jan 27 13:59:13 crc kubenswrapper[4914]: I0127 13:59:13.487481 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zmn2w" podStartSLOduration=1.701437031 podStartE2EDuration="4.487461705s" podCreationTimestamp="2026-01-27 13:59:09 +0000 UTC" firstStartedPulling="2026-01-27 13:59:10.130602529 +0000 UTC m=+908.442952614" lastFinishedPulling="2026-01-27 13:59:12.916627203 +0000 UTC m=+911.228977288" observedRunningTime="2026-01-27 13:59:13.484087033 +0000 UTC m=+911.796437118" watchObservedRunningTime="2026-01-27 13:59:13.487461705 +0000 UTC m=+911.799811790" Jan 27 13:59:13 crc kubenswrapper[4914]: I0127 13:59:13.504040 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-sknr4" podStartSLOduration=1.413932487 podStartE2EDuration="4.504021106s" podCreationTimestamp="2026-01-27 13:59:09 +0000 UTC" firstStartedPulling="2026-01-27 13:59:09.83301308 +0000 UTC m=+908.145363165" lastFinishedPulling="2026-01-27 13:59:12.923101699 +0000 UTC m=+911.235451784" observedRunningTime="2026-01-27 13:59:13.501497067 +0000 UTC m=+911.813847162" watchObservedRunningTime="2026-01-27 13:59:13.504021106 +0000 UTC m=+911.816371181" Jan 27 13:59:13 crc kubenswrapper[4914]: I0127 13:59:13.516967 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4" podStartSLOduration=1.759295092 podStartE2EDuration="4.516948538s" podCreationTimestamp="2026-01-27 13:59:09 +0000 UTC" 
firstStartedPulling="2026-01-27 13:59:10.188520923 +0000 UTC m=+908.500871008" lastFinishedPulling="2026-01-27 13:59:12.946174369 +0000 UTC m=+911.258524454" observedRunningTime="2026-01-27 13:59:13.516110005 +0000 UTC m=+911.828460090" watchObservedRunningTime="2026-01-27 13:59:13.516948538 +0000 UTC m=+911.829298623" Jan 27 13:59:15 crc kubenswrapper[4914]: I0127 13:59:15.489402 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-5ltgd" event={"ID":"497bf40d-5a3c-48a5-90b5-e2bc0566f520","Type":"ContainerStarted","Data":"e37e11af47634b78b2282eedd038cc6204511f1c6472a400fef04c15acb9bcee"} Jan 27 13:59:15 crc kubenswrapper[4914]: I0127 13:59:15.510401 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-5ltgd" podStartSLOduration=1.282060483 podStartE2EDuration="6.510376786s" podCreationTimestamp="2026-01-27 13:59:09 +0000 UTC" firstStartedPulling="2026-01-27 13:59:10.02903053 +0000 UTC m=+908.341380615" lastFinishedPulling="2026-01-27 13:59:15.257346843 +0000 UTC m=+913.569696918" observedRunningTime="2026-01-27 13:59:15.506593913 +0000 UTC m=+913.818944028" watchObservedRunningTime="2026-01-27 13:59:15.510376786 +0000 UTC m=+913.822726891" Jan 27 13:59:19 crc kubenswrapper[4914]: I0127 13:59:19.841450 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-sknr4" Jan 27 13:59:20 crc kubenswrapper[4914]: I0127 13:59:20.098532 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:20 crc kubenswrapper[4914]: I0127 13:59:20.098941 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:20 crc kubenswrapper[4914]: I0127 13:59:20.104531 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:20 crc kubenswrapper[4914]: I0127 13:59:20.525870 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6fdccd656d-ch7mf" Jan 27 13:59:20 crc kubenswrapper[4914]: I0127 13:59:20.586440 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rktkr"] Jan 27 13:59:29 crc kubenswrapper[4914]: I0127 13:59:29.778736 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7wff4" Jan 27 13:59:37 crc kubenswrapper[4914]: I0127 13:59:37.690656 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:59:37 crc kubenswrapper[4914]: I0127 13:59:37.691208 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:59:42 crc kubenswrapper[4914]: I0127 13:59:42.940235 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv"] Jan 27 13:59:42 crc kubenswrapper[4914]: I0127 13:59:42.946520 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" Jan 27 13:59:42 crc kubenswrapper[4914]: I0127 13:59:42.950414 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 13:59:42 crc kubenswrapper[4914]: I0127 13:59:42.952334 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv"] Jan 27 13:59:43 crc kubenswrapper[4914]: I0127 13:59:43.041530 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv\" (UID: \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" Jan 27 13:59:43 crc kubenswrapper[4914]: I0127 13:59:43.041652 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9z42\" (UniqueName: \"kubernetes.io/projected/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-kube-api-access-f9z42\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv\" (UID: \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" Jan 27 13:59:43 crc kubenswrapper[4914]: I0127 13:59:43.041736 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv\" (UID: \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" Jan 27 13:59:43 crc kubenswrapper[4914]: 
I0127 13:59:43.143101 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv\" (UID: \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" Jan 27 13:59:43 crc kubenswrapper[4914]: I0127 13:59:43.143223 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv\" (UID: \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" Jan 27 13:59:43 crc kubenswrapper[4914]: I0127 13:59:43.143287 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9z42\" (UniqueName: \"kubernetes.io/projected/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-kube-api-access-f9z42\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv\" (UID: \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" Jan 27 13:59:43 crc kubenswrapper[4914]: I0127 13:59:43.144295 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv\" (UID: \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" Jan 27 13:59:43 crc kubenswrapper[4914]: I0127 13:59:43.144444 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv\" (UID: \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" Jan 27 13:59:43 crc kubenswrapper[4914]: I0127 13:59:43.171883 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9z42\" (UniqueName: \"kubernetes.io/projected/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-kube-api-access-f9z42\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv\" (UID: \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" Jan 27 13:59:43 crc kubenswrapper[4914]: I0127 13:59:43.264446 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" Jan 27 13:59:43 crc kubenswrapper[4914]: I0127 13:59:43.482716 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv"] Jan 27 13:59:43 crc kubenswrapper[4914]: I0127 13:59:43.658105 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" event={"ID":"c1c7b733-800f-4b1c-93bd-1f5bf1653a64","Type":"ContainerStarted","Data":"15146ec29cea56e0954c3056408a0c4daa520f782fa83e74bfbd259be64ef479"} Jan 27 13:59:43 crc kubenswrapper[4914]: I0127 13:59:43.658152 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" event={"ID":"c1c7b733-800f-4b1c-93bd-1f5bf1653a64","Type":"ContainerStarted","Data":"122a2939d7890ef1d717a14f7b69e97745fc2ea0d7d4f0e4d5ec5a5a57d9e70b"} Jan 27 13:59:44 crc kubenswrapper[4914]: I0127 13:59:44.664378 4914 
generic.go:334] "Generic (PLEG): container finished" podID="c1c7b733-800f-4b1c-93bd-1f5bf1653a64" containerID="15146ec29cea56e0954c3056408a0c4daa520f782fa83e74bfbd259be64ef479" exitCode=0 Jan 27 13:59:44 crc kubenswrapper[4914]: I0127 13:59:44.664418 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" event={"ID":"c1c7b733-800f-4b1c-93bd-1f5bf1653a64","Type":"ContainerDied","Data":"15146ec29cea56e0954c3056408a0c4daa520f782fa83e74bfbd259be64ef479"} Jan 27 13:59:45 crc kubenswrapper[4914]: I0127 13:59:45.630217 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-rktkr" podUID="90260720-9ce0-4da9-932b-34f7ce235091" containerName="console" containerID="cri-o://f4e498cbfa22d30ec98a7de71d843b62bc07e55205f02abf88fc4eda84dd5bde" gracePeriod=15 Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.762889 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rktkr_90260720-9ce0-4da9-932b-34f7ce235091/console/0.log" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.763218 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.790746 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-console-config\") pod \"90260720-9ce0-4da9-932b-34f7ce235091\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.790816 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsjfm\" (UniqueName: \"kubernetes.io/projected/90260720-9ce0-4da9-932b-34f7ce235091-kube-api-access-gsjfm\") pod \"90260720-9ce0-4da9-932b-34f7ce235091\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.790901 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-trusted-ca-bundle\") pod \"90260720-9ce0-4da9-932b-34f7ce235091\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.790943 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90260720-9ce0-4da9-932b-34f7ce235091-console-serving-cert\") pod \"90260720-9ce0-4da9-932b-34f7ce235091\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.790982 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-service-ca\") pod \"90260720-9ce0-4da9-932b-34f7ce235091\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.791009 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-oauth-serving-cert\") pod \"90260720-9ce0-4da9-932b-34f7ce235091\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.791035 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90260720-9ce0-4da9-932b-34f7ce235091-console-oauth-config\") pod \"90260720-9ce0-4da9-932b-34f7ce235091\" (UID: \"90260720-9ce0-4da9-932b-34f7ce235091\") " Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.791592 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-service-ca" (OuterVolumeSpecName: "service-ca") pod "90260720-9ce0-4da9-932b-34f7ce235091" (UID: "90260720-9ce0-4da9-932b-34f7ce235091"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.791655 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-console-config" (OuterVolumeSpecName: "console-config") pod "90260720-9ce0-4da9-932b-34f7ce235091" (UID: "90260720-9ce0-4da9-932b-34f7ce235091"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.792217 4914 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.792244 4914 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.792309 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "90260720-9ce0-4da9-932b-34f7ce235091" (UID: "90260720-9ce0-4da9-932b-34f7ce235091"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.792713 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "90260720-9ce0-4da9-932b-34f7ce235091" (UID: "90260720-9ce0-4da9-932b-34f7ce235091"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.796983 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90260720-9ce0-4da9-932b-34f7ce235091-kube-api-access-gsjfm" (OuterVolumeSpecName: "kube-api-access-gsjfm") pod "90260720-9ce0-4da9-932b-34f7ce235091" (UID: "90260720-9ce0-4da9-932b-34f7ce235091"). InnerVolumeSpecName "kube-api-access-gsjfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.798004 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90260720-9ce0-4da9-932b-34f7ce235091-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "90260720-9ce0-4da9-932b-34f7ce235091" (UID: "90260720-9ce0-4da9-932b-34f7ce235091"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.798551 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90260720-9ce0-4da9-932b-34f7ce235091-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "90260720-9ce0-4da9-932b-34f7ce235091" (UID: "90260720-9ce0-4da9-932b-34f7ce235091"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.892990 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsjfm\" (UniqueName: \"kubernetes.io/projected/90260720-9ce0-4da9-932b-34f7ce235091-kube-api-access-gsjfm\") on node \"crc\" DevicePath \"\"" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.893030 4914 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.893039 4914 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90260720-9ce0-4da9-932b-34f7ce235091-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.893048 4914 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/90260720-9ce0-4da9-932b-34f7ce235091-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:59:46 crc kubenswrapper[4914]: I0127 13:59:46.893058 4914 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90260720-9ce0-4da9-932b-34f7ce235091-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:59:47 crc kubenswrapper[4914]: I0127 13:59:47.276985 4914 generic.go:334] "Generic (PLEG): container finished" podID="c1c7b733-800f-4b1c-93bd-1f5bf1653a64" containerID="96c9c522a1d5f0de31693c1eaf1e65c2a0772d2e06c84f9873165854a5caa2b3" exitCode=0 Jan 27 13:59:47 crc kubenswrapper[4914]: I0127 13:59:47.278523 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" event={"ID":"c1c7b733-800f-4b1c-93bd-1f5bf1653a64","Type":"ContainerDied","Data":"96c9c522a1d5f0de31693c1eaf1e65c2a0772d2e06c84f9873165854a5caa2b3"} Jan 27 13:59:47 crc kubenswrapper[4914]: I0127 13:59:47.297145 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rktkr_90260720-9ce0-4da9-932b-34f7ce235091/console/0.log" Jan 27 13:59:47 crc kubenswrapper[4914]: I0127 13:59:47.297195 4914 generic.go:334] "Generic (PLEG): container finished" podID="90260720-9ce0-4da9-932b-34f7ce235091" containerID="f4e498cbfa22d30ec98a7de71d843b62bc07e55205f02abf88fc4eda84dd5bde" exitCode=2 Jan 27 13:59:47 crc kubenswrapper[4914]: I0127 13:59:47.297220 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rktkr" event={"ID":"90260720-9ce0-4da9-932b-34f7ce235091","Type":"ContainerDied","Data":"f4e498cbfa22d30ec98a7de71d843b62bc07e55205f02abf88fc4eda84dd5bde"} Jan 27 13:59:47 crc kubenswrapper[4914]: I0127 13:59:47.297252 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rktkr" 
event={"ID":"90260720-9ce0-4da9-932b-34f7ce235091","Type":"ContainerDied","Data":"6d3d95507d8f40d5398fc9321dd2724a7c3f834e8abe92e4b0a48ee6d25bb3d3"} Jan 27 13:59:47 crc kubenswrapper[4914]: I0127 13:59:47.297268 4914 scope.go:117] "RemoveContainer" containerID="f4e498cbfa22d30ec98a7de71d843b62bc07e55205f02abf88fc4eda84dd5bde" Jan 27 13:59:47 crc kubenswrapper[4914]: I0127 13:59:47.297306 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rktkr" Jan 27 13:59:47 crc kubenswrapper[4914]: I0127 13:59:47.318022 4914 scope.go:117] "RemoveContainer" containerID="f4e498cbfa22d30ec98a7de71d843b62bc07e55205f02abf88fc4eda84dd5bde" Jan 27 13:59:47 crc kubenswrapper[4914]: E0127 13:59:47.318690 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4e498cbfa22d30ec98a7de71d843b62bc07e55205f02abf88fc4eda84dd5bde\": container with ID starting with f4e498cbfa22d30ec98a7de71d843b62bc07e55205f02abf88fc4eda84dd5bde not found: ID does not exist" containerID="f4e498cbfa22d30ec98a7de71d843b62bc07e55205f02abf88fc4eda84dd5bde" Jan 27 13:59:47 crc kubenswrapper[4914]: I0127 13:59:47.318739 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e498cbfa22d30ec98a7de71d843b62bc07e55205f02abf88fc4eda84dd5bde"} err="failed to get container status \"f4e498cbfa22d30ec98a7de71d843b62bc07e55205f02abf88fc4eda84dd5bde\": rpc error: code = NotFound desc = could not find container \"f4e498cbfa22d30ec98a7de71d843b62bc07e55205f02abf88fc4eda84dd5bde\": container with ID starting with f4e498cbfa22d30ec98a7de71d843b62bc07e55205f02abf88fc4eda84dd5bde not found: ID does not exist" Jan 27 13:59:47 crc kubenswrapper[4914]: I0127 13:59:47.331642 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rktkr"] Jan 27 13:59:47 crc kubenswrapper[4914]: I0127 13:59:47.334972 4914 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-rktkr"] Jan 27 13:59:48 crc kubenswrapper[4914]: I0127 13:59:48.303306 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90260720-9ce0-4da9-932b-34f7ce235091" path="/var/lib/kubelet/pods/90260720-9ce0-4da9-932b-34f7ce235091/volumes" Jan 27 13:59:48 crc kubenswrapper[4914]: I0127 13:59:48.307253 4914 generic.go:334] "Generic (PLEG): container finished" podID="c1c7b733-800f-4b1c-93bd-1f5bf1653a64" containerID="51692cb8dfccd2d42cf7092d6087254889916ca2712de3e8742101bc34db4f08" exitCode=0 Jan 27 13:59:48 crc kubenswrapper[4914]: I0127 13:59:48.307297 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" event={"ID":"c1c7b733-800f-4b1c-93bd-1f5bf1653a64","Type":"ContainerDied","Data":"51692cb8dfccd2d42cf7092d6087254889916ca2712de3e8742101bc34db4f08"} Jan 27 13:59:49 crc kubenswrapper[4914]: I0127 13:59:49.541044 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" Jan 27 13:59:49 crc kubenswrapper[4914]: I0127 13:59:49.628521 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-util\") pod \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\" (UID: \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\") " Jan 27 13:59:49 crc kubenswrapper[4914]: I0127 13:59:49.628689 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9z42\" (UniqueName: \"kubernetes.io/projected/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-kube-api-access-f9z42\") pod \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\" (UID: \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\") " Jan 27 13:59:49 crc kubenswrapper[4914]: I0127 13:59:49.628771 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-bundle\") pod \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\" (UID: \"c1c7b733-800f-4b1c-93bd-1f5bf1653a64\") " Jan 27 13:59:49 crc kubenswrapper[4914]: I0127 13:59:49.629695 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-bundle" (OuterVolumeSpecName: "bundle") pod "c1c7b733-800f-4b1c-93bd-1f5bf1653a64" (UID: "c1c7b733-800f-4b1c-93bd-1f5bf1653a64"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:59:49 crc kubenswrapper[4914]: I0127 13:59:49.634054 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-kube-api-access-f9z42" (OuterVolumeSpecName: "kube-api-access-f9z42") pod "c1c7b733-800f-4b1c-93bd-1f5bf1653a64" (UID: "c1c7b733-800f-4b1c-93bd-1f5bf1653a64"). InnerVolumeSpecName "kube-api-access-f9z42". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:59:49 crc kubenswrapper[4914]: I0127 13:59:49.639299 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-util" (OuterVolumeSpecName: "util") pod "c1c7b733-800f-4b1c-93bd-1f5bf1653a64" (UID: "c1c7b733-800f-4b1c-93bd-1f5bf1653a64"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:59:49 crc kubenswrapper[4914]: I0127 13:59:49.729656 4914 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:59:49 crc kubenswrapper[4914]: I0127 13:59:49.729687 4914 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-util\") on node \"crc\" DevicePath \"\"" Jan 27 13:59:49 crc kubenswrapper[4914]: I0127 13:59:49.729698 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9z42\" (UniqueName: \"kubernetes.io/projected/c1c7b733-800f-4b1c-93bd-1f5bf1653a64-kube-api-access-f9z42\") on node \"crc\" DevicePath \"\"" Jan 27 13:59:50 crc kubenswrapper[4914]: I0127 13:59:50.321753 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" event={"ID":"c1c7b733-800f-4b1c-93bd-1f5bf1653a64","Type":"ContainerDied","Data":"122a2939d7890ef1d717a14f7b69e97745fc2ea0d7d4f0e4d5ec5a5a57d9e70b"} Jan 27 13:59:50 crc kubenswrapper[4914]: I0127 13:59:50.321797 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="122a2939d7890ef1d717a14f7b69e97745fc2ea0d7d4f0e4d5ec5a5a57d9e70b" Jan 27 13:59:50 crc kubenswrapper[4914]: I0127 13:59:50.321830 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.693683 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7"] Jan 27 13:59:58 crc kubenswrapper[4914]: E0127 13:59:58.695478 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c7b733-800f-4b1c-93bd-1f5bf1653a64" containerName="pull" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.695596 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c7b733-800f-4b1c-93bd-1f5bf1653a64" containerName="pull" Jan 27 13:59:58 crc kubenswrapper[4914]: E0127 13:59:58.695666 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90260720-9ce0-4da9-932b-34f7ce235091" containerName="console" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.695735 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="90260720-9ce0-4da9-932b-34f7ce235091" containerName="console" Jan 27 13:59:58 crc kubenswrapper[4914]: E0127 13:59:58.695851 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c7b733-800f-4b1c-93bd-1f5bf1653a64" containerName="util" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.695918 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c7b733-800f-4b1c-93bd-1f5bf1653a64" containerName="util" Jan 27 13:59:58 crc kubenswrapper[4914]: E0127 13:59:58.695984 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c7b733-800f-4b1c-93bd-1f5bf1653a64" containerName="extract" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.696048 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c7b733-800f-4b1c-93bd-1f5bf1653a64" containerName="extract" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.696240 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="90260720-9ce0-4da9-932b-34f7ce235091" 
containerName="console" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.696314 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c7b733-800f-4b1c-93bd-1f5bf1653a64" containerName="extract" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.696863 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.705130 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.706944 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.708145 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.711009 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-8strv" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.711018 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.721504 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7"] Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.740102 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77a4ae14-5fc9-461d-b886-b0dee70471ed-apiservice-cert\") pod \"metallb-operator-controller-manager-6fb95778d6-wt7m7\" (UID: \"77a4ae14-5fc9-461d-b886-b0dee70471ed\") " pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" Jan 
27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.740154 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcdnh\" (UniqueName: \"kubernetes.io/projected/77a4ae14-5fc9-461d-b886-b0dee70471ed-kube-api-access-qcdnh\") pod \"metallb-operator-controller-manager-6fb95778d6-wt7m7\" (UID: \"77a4ae14-5fc9-461d-b886-b0dee70471ed\") " pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.740207 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77a4ae14-5fc9-461d-b886-b0dee70471ed-webhook-cert\") pod \"metallb-operator-controller-manager-6fb95778d6-wt7m7\" (UID: \"77a4ae14-5fc9-461d-b886-b0dee70471ed\") " pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.841579 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77a4ae14-5fc9-461d-b886-b0dee70471ed-apiservice-cert\") pod \"metallb-operator-controller-manager-6fb95778d6-wt7m7\" (UID: \"77a4ae14-5fc9-461d-b886-b0dee70471ed\") " pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.841819 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcdnh\" (UniqueName: \"kubernetes.io/projected/77a4ae14-5fc9-461d-b886-b0dee70471ed-kube-api-access-qcdnh\") pod \"metallb-operator-controller-manager-6fb95778d6-wt7m7\" (UID: \"77a4ae14-5fc9-461d-b886-b0dee70471ed\") " pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.841943 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/77a4ae14-5fc9-461d-b886-b0dee70471ed-webhook-cert\") pod \"metallb-operator-controller-manager-6fb95778d6-wt7m7\" (UID: \"77a4ae14-5fc9-461d-b886-b0dee70471ed\") " pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.847615 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77a4ae14-5fc9-461d-b886-b0dee70471ed-apiservice-cert\") pod \"metallb-operator-controller-manager-6fb95778d6-wt7m7\" (UID: \"77a4ae14-5fc9-461d-b886-b0dee70471ed\") " pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.858011 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77a4ae14-5fc9-461d-b886-b0dee70471ed-webhook-cert\") pod \"metallb-operator-controller-manager-6fb95778d6-wt7m7\" (UID: \"77a4ae14-5fc9-461d-b886-b0dee70471ed\") " pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.867354 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcdnh\" (UniqueName: \"kubernetes.io/projected/77a4ae14-5fc9-461d-b886-b0dee70471ed-kube-api-access-qcdnh\") pod \"metallb-operator-controller-manager-6fb95778d6-wt7m7\" (UID: \"77a4ae14-5fc9-461d-b886-b0dee70471ed\") " pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.941259 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd"] Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.942423 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.944561 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.948510 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.952877 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-k2qd7" Jan 27 13:59:58 crc kubenswrapper[4914]: I0127 13:59:58.963087 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd"] Jan 27 13:59:59 crc kubenswrapper[4914]: I0127 13:59:59.014642 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" Jan 27 13:59:59 crc kubenswrapper[4914]: I0127 13:59:59.044961 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79390e5c-67e1-4a23-82c5-4c0bc346586b-apiservice-cert\") pod \"metallb-operator-webhook-server-6444797f4c-hnlrd\" (UID: \"79390e5c-67e1-4a23-82c5-4c0bc346586b\") " pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" Jan 27 13:59:59 crc kubenswrapper[4914]: I0127 13:59:59.045417 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffggg\" (UniqueName: \"kubernetes.io/projected/79390e5c-67e1-4a23-82c5-4c0bc346586b-kube-api-access-ffggg\") pod \"metallb-operator-webhook-server-6444797f4c-hnlrd\" (UID: \"79390e5c-67e1-4a23-82c5-4c0bc346586b\") " pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" Jan 27 13:59:59 crc kubenswrapper[4914]: 
I0127 13:59:59.045582 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79390e5c-67e1-4a23-82c5-4c0bc346586b-webhook-cert\") pod \"metallb-operator-webhook-server-6444797f4c-hnlrd\" (UID: \"79390e5c-67e1-4a23-82c5-4c0bc346586b\") " pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" Jan 27 13:59:59 crc kubenswrapper[4914]: I0127 13:59:59.147046 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffggg\" (UniqueName: \"kubernetes.io/projected/79390e5c-67e1-4a23-82c5-4c0bc346586b-kube-api-access-ffggg\") pod \"metallb-operator-webhook-server-6444797f4c-hnlrd\" (UID: \"79390e5c-67e1-4a23-82c5-4c0bc346586b\") " pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" Jan 27 13:59:59 crc kubenswrapper[4914]: I0127 13:59:59.147094 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79390e5c-67e1-4a23-82c5-4c0bc346586b-webhook-cert\") pod \"metallb-operator-webhook-server-6444797f4c-hnlrd\" (UID: \"79390e5c-67e1-4a23-82c5-4c0bc346586b\") " pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" Jan 27 13:59:59 crc kubenswrapper[4914]: I0127 13:59:59.147166 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79390e5c-67e1-4a23-82c5-4c0bc346586b-apiservice-cert\") pod \"metallb-operator-webhook-server-6444797f4c-hnlrd\" (UID: \"79390e5c-67e1-4a23-82c5-4c0bc346586b\") " pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" Jan 27 13:59:59 crc kubenswrapper[4914]: I0127 13:59:59.156519 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79390e5c-67e1-4a23-82c5-4c0bc346586b-webhook-cert\") pod 
\"metallb-operator-webhook-server-6444797f4c-hnlrd\" (UID: \"79390e5c-67e1-4a23-82c5-4c0bc346586b\") " pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" Jan 27 13:59:59 crc kubenswrapper[4914]: I0127 13:59:59.163440 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79390e5c-67e1-4a23-82c5-4c0bc346586b-apiservice-cert\") pod \"metallb-operator-webhook-server-6444797f4c-hnlrd\" (UID: \"79390e5c-67e1-4a23-82c5-4c0bc346586b\") " pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" Jan 27 13:59:59 crc kubenswrapper[4914]: I0127 13:59:59.166601 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffggg\" (UniqueName: \"kubernetes.io/projected/79390e5c-67e1-4a23-82c5-4c0bc346586b-kube-api-access-ffggg\") pod \"metallb-operator-webhook-server-6444797f4c-hnlrd\" (UID: \"79390e5c-67e1-4a23-82c5-4c0bc346586b\") " pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" Jan 27 13:59:59 crc kubenswrapper[4914]: I0127 13:59:59.255244 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" Jan 27 13:59:59 crc kubenswrapper[4914]: I0127 13:59:59.255981 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7"] Jan 27 13:59:59 crc kubenswrapper[4914]: W0127 13:59:59.262930 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77a4ae14_5fc9_461d_b886_b0dee70471ed.slice/crio-888e349c97fa9e1fb51036df86988f3b61e40d0ab52ecbb00fda637dad022765 WatchSource:0}: Error finding container 888e349c97fa9e1fb51036df86988f3b61e40d0ab52ecbb00fda637dad022765: Status 404 returned error can't find the container with id 888e349c97fa9e1fb51036df86988f3b61e40d0ab52ecbb00fda637dad022765 Jan 27 13:59:59 crc kubenswrapper[4914]: I0127 13:59:59.391964 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" event={"ID":"77a4ae14-5fc9-461d-b886-b0dee70471ed","Type":"ContainerStarted","Data":"888e349c97fa9e1fb51036df86988f3b61e40d0ab52ecbb00fda637dad022765"} Jan 27 13:59:59 crc kubenswrapper[4914]: I0127 13:59:59.700463 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd"] Jan 27 13:59:59 crc kubenswrapper[4914]: W0127 13:59:59.705902 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79390e5c_67e1_4a23_82c5_4c0bc346586b.slice/crio-1475513b782aa7dc359e37ac9f4f598e18e858de213d75957040e56874589764 WatchSource:0}: Error finding container 1475513b782aa7dc359e37ac9f4f598e18e858de213d75957040e56874589764: Status 404 returned error can't find the container with id 1475513b782aa7dc359e37ac9f4f598e18e858de213d75957040e56874589764 Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.143124 4914 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt"] Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.143929 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.148291 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.150222 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.159495 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/245bc0a3-6510-45cc-8040-0b1c2435436d-secret-volume\") pod \"collect-profiles-29492040-8s5xt\" (UID: \"245bc0a3-6510-45cc-8040-0b1c2435436d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.159617 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klftq\" (UniqueName: \"kubernetes.io/projected/245bc0a3-6510-45cc-8040-0b1c2435436d-kube-api-access-klftq\") pod \"collect-profiles-29492040-8s5xt\" (UID: \"245bc0a3-6510-45cc-8040-0b1c2435436d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.159660 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/245bc0a3-6510-45cc-8040-0b1c2435436d-config-volume\") pod \"collect-profiles-29492040-8s5xt\" (UID: \"245bc0a3-6510-45cc-8040-0b1c2435436d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.171198 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt"] Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.261457 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klftq\" (UniqueName: \"kubernetes.io/projected/245bc0a3-6510-45cc-8040-0b1c2435436d-kube-api-access-klftq\") pod \"collect-profiles-29492040-8s5xt\" (UID: \"245bc0a3-6510-45cc-8040-0b1c2435436d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.261568 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/245bc0a3-6510-45cc-8040-0b1c2435436d-config-volume\") pod \"collect-profiles-29492040-8s5xt\" (UID: \"245bc0a3-6510-45cc-8040-0b1c2435436d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.261663 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/245bc0a3-6510-45cc-8040-0b1c2435436d-secret-volume\") pod \"collect-profiles-29492040-8s5xt\" (UID: \"245bc0a3-6510-45cc-8040-0b1c2435436d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.262966 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/245bc0a3-6510-45cc-8040-0b1c2435436d-config-volume\") pod \"collect-profiles-29492040-8s5xt\" (UID: \"245bc0a3-6510-45cc-8040-0b1c2435436d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" Jan 27 14:00:00 crc kubenswrapper[4914]: 
I0127 14:00:00.267574 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/245bc0a3-6510-45cc-8040-0b1c2435436d-secret-volume\") pod \"collect-profiles-29492040-8s5xt\" (UID: \"245bc0a3-6510-45cc-8040-0b1c2435436d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.309686 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klftq\" (UniqueName: \"kubernetes.io/projected/245bc0a3-6510-45cc-8040-0b1c2435436d-kube-api-access-klftq\") pod \"collect-profiles-29492040-8s5xt\" (UID: \"245bc0a3-6510-45cc-8040-0b1c2435436d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.398579 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" event={"ID":"79390e5c-67e1-4a23-82c5-4c0bc346586b","Type":"ContainerStarted","Data":"1475513b782aa7dc359e37ac9f4f598e18e858de213d75957040e56874589764"} Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.464225 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" Jan 27 14:00:00 crc kubenswrapper[4914]: I0127 14:00:00.752085 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt"] Jan 27 14:00:00 crc kubenswrapper[4914]: W0127 14:00:00.758108 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod245bc0a3_6510_45cc_8040_0b1c2435436d.slice/crio-b90fbc8af984470e411485f9380ebfc17d3435e019cec8a759723ad7a4d7dfef WatchSource:0}: Error finding container b90fbc8af984470e411485f9380ebfc17d3435e019cec8a759723ad7a4d7dfef: Status 404 returned error can't find the container with id b90fbc8af984470e411485f9380ebfc17d3435e019cec8a759723ad7a4d7dfef Jan 27 14:00:01 crc kubenswrapper[4914]: I0127 14:00:01.406917 4914 generic.go:334] "Generic (PLEG): container finished" podID="245bc0a3-6510-45cc-8040-0b1c2435436d" containerID="c11acf5dd249c0e39eb4f0dfbba5ef8c775206ec3d853160956e949b15e77da3" exitCode=0 Jan 27 14:00:01 crc kubenswrapper[4914]: I0127 14:00:01.407025 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" event={"ID":"245bc0a3-6510-45cc-8040-0b1c2435436d","Type":"ContainerDied","Data":"c11acf5dd249c0e39eb4f0dfbba5ef8c775206ec3d853160956e949b15e77da3"} Jan 27 14:00:01 crc kubenswrapper[4914]: I0127 14:00:01.408021 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" event={"ID":"245bc0a3-6510-45cc-8040-0b1c2435436d","Type":"ContainerStarted","Data":"b90fbc8af984470e411485f9380ebfc17d3435e019cec8a759723ad7a4d7dfef"} Jan 27 14:00:02 crc kubenswrapper[4914]: I0127 14:00:02.702168 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" Jan 27 14:00:02 crc kubenswrapper[4914]: I0127 14:00:02.892385 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klftq\" (UniqueName: \"kubernetes.io/projected/245bc0a3-6510-45cc-8040-0b1c2435436d-kube-api-access-klftq\") pod \"245bc0a3-6510-45cc-8040-0b1c2435436d\" (UID: \"245bc0a3-6510-45cc-8040-0b1c2435436d\") " Jan 27 14:00:02 crc kubenswrapper[4914]: I0127 14:00:02.892479 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/245bc0a3-6510-45cc-8040-0b1c2435436d-secret-volume\") pod \"245bc0a3-6510-45cc-8040-0b1c2435436d\" (UID: \"245bc0a3-6510-45cc-8040-0b1c2435436d\") " Jan 27 14:00:02 crc kubenswrapper[4914]: I0127 14:00:02.892589 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/245bc0a3-6510-45cc-8040-0b1c2435436d-config-volume\") pod \"245bc0a3-6510-45cc-8040-0b1c2435436d\" (UID: \"245bc0a3-6510-45cc-8040-0b1c2435436d\") " Jan 27 14:00:02 crc kubenswrapper[4914]: I0127 14:00:02.894401 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/245bc0a3-6510-45cc-8040-0b1c2435436d-config-volume" (OuterVolumeSpecName: "config-volume") pod "245bc0a3-6510-45cc-8040-0b1c2435436d" (UID: "245bc0a3-6510-45cc-8040-0b1c2435436d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:00:02 crc kubenswrapper[4914]: I0127 14:00:02.898674 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/245bc0a3-6510-45cc-8040-0b1c2435436d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "245bc0a3-6510-45cc-8040-0b1c2435436d" (UID: "245bc0a3-6510-45cc-8040-0b1c2435436d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:00:02 crc kubenswrapper[4914]: I0127 14:00:02.899052 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/245bc0a3-6510-45cc-8040-0b1c2435436d-kube-api-access-klftq" (OuterVolumeSpecName: "kube-api-access-klftq") pod "245bc0a3-6510-45cc-8040-0b1c2435436d" (UID: "245bc0a3-6510-45cc-8040-0b1c2435436d"). InnerVolumeSpecName "kube-api-access-klftq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:00:02 crc kubenswrapper[4914]: I0127 14:00:02.995216 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klftq\" (UniqueName: \"kubernetes.io/projected/245bc0a3-6510-45cc-8040-0b1c2435436d-kube-api-access-klftq\") on node \"crc\" DevicePath \"\"" Jan 27 14:00:02 crc kubenswrapper[4914]: I0127 14:00:02.995279 4914 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/245bc0a3-6510-45cc-8040-0b1c2435436d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:00:02 crc kubenswrapper[4914]: I0127 14:00:02.995296 4914 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/245bc0a3-6510-45cc-8040-0b1c2435436d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:00:03 crc kubenswrapper[4914]: I0127 14:00:03.422409 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" event={"ID":"245bc0a3-6510-45cc-8040-0b1c2435436d","Type":"ContainerDied","Data":"b90fbc8af984470e411485f9380ebfc17d3435e019cec8a759723ad7a4d7dfef"} Jan 27 14:00:03 crc kubenswrapper[4914]: I0127 14:00:03.422824 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b90fbc8af984470e411485f9380ebfc17d3435e019cec8a759723ad7a4d7dfef" Jan 27 14:00:03 crc kubenswrapper[4914]: I0127 14:00:03.422937 4914 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt" Jan 27 14:00:04 crc kubenswrapper[4914]: I0127 14:00:04.432545 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" event={"ID":"77a4ae14-5fc9-461d-b886-b0dee70471ed","Type":"ContainerStarted","Data":"c0e37eadd961959ad2a0efaa31a9d41ee1896b2ea290eeb4c64448ddfec087e4"} Jan 27 14:00:04 crc kubenswrapper[4914]: I0127 14:00:04.433160 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" Jan 27 14:00:04 crc kubenswrapper[4914]: I0127 14:00:04.454640 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7" podStartSLOduration=1.960888417 podStartE2EDuration="6.454618301s" podCreationTimestamp="2026-01-27 13:59:58 +0000 UTC" firstStartedPulling="2026-01-27 13:59:59.264937137 +0000 UTC m=+957.577287222" lastFinishedPulling="2026-01-27 14:00:03.758667021 +0000 UTC m=+962.071017106" observedRunningTime="2026-01-27 14:00:04.451895307 +0000 UTC m=+962.764245392" watchObservedRunningTime="2026-01-27 14:00:04.454618301 +0000 UTC m=+962.766968386" Jan 27 14:00:07 crc kubenswrapper[4914]: I0127 14:00:07.691668 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:00:07 crc kubenswrapper[4914]: I0127 14:00:07.692123 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:00:07 crc kubenswrapper[4914]: I0127 14:00:07.692166 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 14:00:07 crc kubenswrapper[4914]: I0127 14:00:07.692691 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0fd1f806130ae08db1dd18a20f06b6fe85e397f8f5aa2658045094f139caa41"} pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:00:07 crc kubenswrapper[4914]: I0127 14:00:07.692735 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" containerID="cri-o://a0fd1f806130ae08db1dd18a20f06b6fe85e397f8f5aa2658045094f139caa41" gracePeriod=600 Jan 27 14:00:08 crc kubenswrapper[4914]: I0127 14:00:08.458503 4914 generic.go:334] "Generic (PLEG): container finished" podID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerID="a0fd1f806130ae08db1dd18a20f06b6fe85e397f8f5aa2658045094f139caa41" exitCode=0 Jan 27 14:00:08 crc kubenswrapper[4914]: I0127 14:00:08.458582 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerDied","Data":"a0fd1f806130ae08db1dd18a20f06b6fe85e397f8f5aa2658045094f139caa41"} Jan 27 14:00:08 crc kubenswrapper[4914]: I0127 14:00:08.458820 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" 
event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"1eaab6549a5f3b2138f4d755eedcddc2ad9f911aba3a749ef9e6dd2fe3f38be3"} Jan 27 14:00:08 crc kubenswrapper[4914]: I0127 14:00:08.458889 4914 scope.go:117] "RemoveContainer" containerID="37dc1ebac0798b9157fdcba67221027c4ccb916dafe67ba310b9792bc6166b37" Jan 27 14:00:14 crc kubenswrapper[4914]: I0127 14:00:14.897582 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pg4dl"] Jan 27 14:00:14 crc kubenswrapper[4914]: E0127 14:00:14.898393 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="245bc0a3-6510-45cc-8040-0b1c2435436d" containerName="collect-profiles" Jan 27 14:00:14 crc kubenswrapper[4914]: I0127 14:00:14.898412 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="245bc0a3-6510-45cc-8040-0b1c2435436d" containerName="collect-profiles" Jan 27 14:00:14 crc kubenswrapper[4914]: I0127 14:00:14.898529 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="245bc0a3-6510-45cc-8040-0b1c2435436d" containerName="collect-profiles" Jan 27 14:00:14 crc kubenswrapper[4914]: I0127 14:00:14.899410 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg4dl" Jan 27 14:00:14 crc kubenswrapper[4914]: I0127 14:00:14.909082 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg4dl"] Jan 27 14:00:15 crc kubenswrapper[4914]: I0127 14:00:15.063568 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-utilities\") pod \"redhat-marketplace-pg4dl\" (UID: \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\") " pod="openshift-marketplace/redhat-marketplace-pg4dl" Jan 27 14:00:15 crc kubenswrapper[4914]: I0127 14:00:15.063609 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fzsd\" (UniqueName: \"kubernetes.io/projected/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-kube-api-access-9fzsd\") pod \"redhat-marketplace-pg4dl\" (UID: \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\") " pod="openshift-marketplace/redhat-marketplace-pg4dl" Jan 27 14:00:15 crc kubenswrapper[4914]: I0127 14:00:15.063656 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-catalog-content\") pod \"redhat-marketplace-pg4dl\" (UID: \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\") " pod="openshift-marketplace/redhat-marketplace-pg4dl" Jan 27 14:00:15 crc kubenswrapper[4914]: I0127 14:00:15.164794 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-utilities\") pod \"redhat-marketplace-pg4dl\" (UID: \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\") " pod="openshift-marketplace/redhat-marketplace-pg4dl" Jan 27 14:00:15 crc kubenswrapper[4914]: I0127 14:00:15.164876 4914 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9fzsd\" (UniqueName: \"kubernetes.io/projected/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-kube-api-access-9fzsd\") pod \"redhat-marketplace-pg4dl\" (UID: \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\") " pod="openshift-marketplace/redhat-marketplace-pg4dl" Jan 27 14:00:15 crc kubenswrapper[4914]: I0127 14:00:15.164942 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-catalog-content\") pod \"redhat-marketplace-pg4dl\" (UID: \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\") " pod="openshift-marketplace/redhat-marketplace-pg4dl" Jan 27 14:00:15 crc kubenswrapper[4914]: I0127 14:00:15.165288 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-utilities\") pod \"redhat-marketplace-pg4dl\" (UID: \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\") " pod="openshift-marketplace/redhat-marketplace-pg4dl" Jan 27 14:00:15 crc kubenswrapper[4914]: I0127 14:00:15.165433 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-catalog-content\") pod \"redhat-marketplace-pg4dl\" (UID: \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\") " pod="openshift-marketplace/redhat-marketplace-pg4dl" Jan 27 14:00:15 crc kubenswrapper[4914]: I0127 14:00:15.192109 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fzsd\" (UniqueName: \"kubernetes.io/projected/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-kube-api-access-9fzsd\") pod \"redhat-marketplace-pg4dl\" (UID: \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\") " pod="openshift-marketplace/redhat-marketplace-pg4dl" Jan 27 14:00:15 crc kubenswrapper[4914]: I0127 14:00:15.267276 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg4dl"
Jan 27 14:00:15 crc kubenswrapper[4914]: I0127 14:00:15.503429 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg4dl"]
Jan 27 14:00:16 crc kubenswrapper[4914]: I0127 14:00:16.502943 4914 generic.go:334] "Generic (PLEG): container finished" podID="e8ddcbb1-52ed-4029-baa4-03cd6f140d80" containerID="407ecfba92e4abd805baba74a6a585870c80a8c92afbc3b3239060ad0786c928" exitCode=0
Jan 27 14:00:16 crc kubenswrapper[4914]: I0127 14:00:16.503241 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg4dl" event={"ID":"e8ddcbb1-52ed-4029-baa4-03cd6f140d80","Type":"ContainerDied","Data":"407ecfba92e4abd805baba74a6a585870c80a8c92afbc3b3239060ad0786c928"}
Jan 27 14:00:16 crc kubenswrapper[4914]: I0127 14:00:16.503268 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg4dl" event={"ID":"e8ddcbb1-52ed-4029-baa4-03cd6f140d80","Type":"ContainerStarted","Data":"ab6ecdb344ae0acbd1676d3c8150eb5fe80d7a44bc9ae93a95eb6e6d34115443"}
Jan 27 14:00:17 crc kubenswrapper[4914]: I0127 14:00:17.516362 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg4dl" event={"ID":"e8ddcbb1-52ed-4029-baa4-03cd6f140d80","Type":"ContainerStarted","Data":"59e1c8e70858b056ef680cf6cccc47a026c7ab355729c986b12cb239f7dfdbca"}
Jan 27 14:00:18 crc kubenswrapper[4914]: I0127 14:00:18.524077 4914 generic.go:334] "Generic (PLEG): container finished" podID="e8ddcbb1-52ed-4029-baa4-03cd6f140d80" containerID="59e1c8e70858b056ef680cf6cccc47a026c7ab355729c986b12cb239f7dfdbca" exitCode=0
Jan 27 14:00:18 crc kubenswrapper[4914]: I0127 14:00:18.524135 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg4dl" event={"ID":"e8ddcbb1-52ed-4029-baa4-03cd6f140d80","Type":"ContainerDied","Data":"59e1c8e70858b056ef680cf6cccc47a026c7ab355729c986b12cb239f7dfdbca"}
Jan 27 14:00:20 crc kubenswrapper[4914]: I0127 14:00:20.536261 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg4dl" event={"ID":"e8ddcbb1-52ed-4029-baa4-03cd6f140d80","Type":"ContainerStarted","Data":"b3752a3fd21ed72bff1c90cd168a6add841e9b32d24318506f7897c75008ad2e"}
Jan 27 14:00:20 crc kubenswrapper[4914]: I0127 14:00:20.537452 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" event={"ID":"79390e5c-67e1-4a23-82c5-4c0bc346586b","Type":"ContainerStarted","Data":"4016de0c94305d8538a566feb8922064359d18f3679e7a50c5741f9cf750063b"}
Jan 27 14:00:20 crc kubenswrapper[4914]: I0127 14:00:20.537582 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd"
Jan 27 14:00:20 crc kubenswrapper[4914]: I0127 14:00:20.555756 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pg4dl" podStartSLOduration=2.949604341 podStartE2EDuration="6.555732112s" podCreationTimestamp="2026-01-27 14:00:14 +0000 UTC" firstStartedPulling="2026-01-27 14:00:16.504890238 +0000 UTC m=+974.817240323" lastFinishedPulling="2026-01-27 14:00:20.111018009 +0000 UTC m=+978.423368094" observedRunningTime="2026-01-27 14:00:20.550983473 +0000 UTC m=+978.863333558" watchObservedRunningTime="2026-01-27 14:00:20.555732112 +0000 UTC m=+978.868082187"
Jan 27 14:00:20 crc kubenswrapper[4914]: I0127 14:00:20.572223 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd" podStartSLOduration=2.522885736 podStartE2EDuration="22.572202049s" podCreationTimestamp="2026-01-27 13:59:58 +0000 UTC" firstStartedPulling="2026-01-27 13:59:59.708537572 +0000 UTC m=+958.020887647" lastFinishedPulling="2026-01-27 14:00:19.757853875 +0000 UTC m=+978.070203960" observedRunningTime="2026-01-27 14:00:20.571357437 +0000 UTC m=+978.883707562" watchObservedRunningTime="2026-01-27 14:00:20.572202049 +0000 UTC m=+978.884552134"
Jan 27 14:00:25 crc kubenswrapper[4914]: I0127 14:00:25.268623 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pg4dl"
Jan 27 14:00:25 crc kubenswrapper[4914]: I0127 14:00:25.269404 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pg4dl"
Jan 27 14:00:25 crc kubenswrapper[4914]: I0127 14:00:25.305241 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pg4dl"
Jan 27 14:00:25 crc kubenswrapper[4914]: I0127 14:00:25.648863 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pg4dl"
Jan 27 14:00:25 crc kubenswrapper[4914]: I0127 14:00:25.688042 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg4dl"]
Jan 27 14:00:27 crc kubenswrapper[4914]: I0127 14:00:27.580204 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pg4dl" podUID="e8ddcbb1-52ed-4029-baa4-03cd6f140d80" containerName="registry-server" containerID="cri-o://b3752a3fd21ed72bff1c90cd168a6add841e9b32d24318506f7897c75008ad2e" gracePeriod=2
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.012265 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg4dl"
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.141014 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-utilities\") pod \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\" (UID: \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\") "
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.141120 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-catalog-content\") pod \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\" (UID: \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\") "
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.141201 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fzsd\" (UniqueName: \"kubernetes.io/projected/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-kube-api-access-9fzsd\") pod \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\" (UID: \"e8ddcbb1-52ed-4029-baa4-03cd6f140d80\") "
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.142331 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-utilities" (OuterVolumeSpecName: "utilities") pod "e8ddcbb1-52ed-4029-baa4-03cd6f140d80" (UID: "e8ddcbb1-52ed-4029-baa4-03cd6f140d80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.145261 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.146476 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-kube-api-access-9fzsd" (OuterVolumeSpecName: "kube-api-access-9fzsd") pod "e8ddcbb1-52ed-4029-baa4-03cd6f140d80" (UID: "e8ddcbb1-52ed-4029-baa4-03cd6f140d80"). InnerVolumeSpecName "kube-api-access-9fzsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.217389 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8ddcbb1-52ed-4029-baa4-03cd6f140d80" (UID: "e8ddcbb1-52ed-4029-baa4-03cd6f140d80"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.246602 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fzsd\" (UniqueName: \"kubernetes.io/projected/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-kube-api-access-9fzsd\") on node \"crc\" DevicePath \"\""
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.246654 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ddcbb1-52ed-4029-baa4-03cd6f140d80-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.587615 4914 generic.go:334] "Generic (PLEG): container finished" podID="e8ddcbb1-52ed-4029-baa4-03cd6f140d80" containerID="b3752a3fd21ed72bff1c90cd168a6add841e9b32d24318506f7897c75008ad2e" exitCode=0
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.587655 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg4dl" event={"ID":"e8ddcbb1-52ed-4029-baa4-03cd6f140d80","Type":"ContainerDied","Data":"b3752a3fd21ed72bff1c90cd168a6add841e9b32d24318506f7897c75008ad2e"}
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.587704 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg4dl" event={"ID":"e8ddcbb1-52ed-4029-baa4-03cd6f140d80","Type":"ContainerDied","Data":"ab6ecdb344ae0acbd1676d3c8150eb5fe80d7a44bc9ae93a95eb6e6d34115443"}
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.587726 4914 scope.go:117] "RemoveContainer" containerID="b3752a3fd21ed72bff1c90cd168a6add841e9b32d24318506f7897c75008ad2e"
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.587733 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg4dl"
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.608267 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg4dl"]
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.610675 4914 scope.go:117] "RemoveContainer" containerID="59e1c8e70858b056ef680cf6cccc47a026c7ab355729c986b12cb239f7dfdbca"
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.613338 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg4dl"]
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.626718 4914 scope.go:117] "RemoveContainer" containerID="407ecfba92e4abd805baba74a6a585870c80a8c92afbc3b3239060ad0786c928"
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.642995 4914 scope.go:117] "RemoveContainer" containerID="b3752a3fd21ed72bff1c90cd168a6add841e9b32d24318506f7897c75008ad2e"
Jan 27 14:00:28 crc kubenswrapper[4914]: E0127 14:00:28.643426 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3752a3fd21ed72bff1c90cd168a6add841e9b32d24318506f7897c75008ad2e\": container with ID starting with b3752a3fd21ed72bff1c90cd168a6add841e9b32d24318506f7897c75008ad2e not found: ID does not exist" containerID="b3752a3fd21ed72bff1c90cd168a6add841e9b32d24318506f7897c75008ad2e"
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.643474 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3752a3fd21ed72bff1c90cd168a6add841e9b32d24318506f7897c75008ad2e"} err="failed to get container status \"b3752a3fd21ed72bff1c90cd168a6add841e9b32d24318506f7897c75008ad2e\": rpc error: code = NotFound desc = could not find container \"b3752a3fd21ed72bff1c90cd168a6add841e9b32d24318506f7897c75008ad2e\": container with ID starting with b3752a3fd21ed72bff1c90cd168a6add841e9b32d24318506f7897c75008ad2e not found: ID does not exist"
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.643510 4914 scope.go:117] "RemoveContainer" containerID="59e1c8e70858b056ef680cf6cccc47a026c7ab355729c986b12cb239f7dfdbca"
Jan 27 14:00:28 crc kubenswrapper[4914]: E0127 14:00:28.643948 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59e1c8e70858b056ef680cf6cccc47a026c7ab355729c986b12cb239f7dfdbca\": container with ID starting with 59e1c8e70858b056ef680cf6cccc47a026c7ab355729c986b12cb239f7dfdbca not found: ID does not exist" containerID="59e1c8e70858b056ef680cf6cccc47a026c7ab355729c986b12cb239f7dfdbca"
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.643983 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59e1c8e70858b056ef680cf6cccc47a026c7ab355729c986b12cb239f7dfdbca"} err="failed to get container status \"59e1c8e70858b056ef680cf6cccc47a026c7ab355729c986b12cb239f7dfdbca\": rpc error: code = NotFound desc = could not find container \"59e1c8e70858b056ef680cf6cccc47a026c7ab355729c986b12cb239f7dfdbca\": container with ID starting with 59e1c8e70858b056ef680cf6cccc47a026c7ab355729c986b12cb239f7dfdbca not found: ID does not exist"
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.644007 4914 scope.go:117] "RemoveContainer" containerID="407ecfba92e4abd805baba74a6a585870c80a8c92afbc3b3239060ad0786c928"
Jan 27 14:00:28 crc kubenswrapper[4914]: E0127 14:00:28.644263 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"407ecfba92e4abd805baba74a6a585870c80a8c92afbc3b3239060ad0786c928\": container with ID starting with 407ecfba92e4abd805baba74a6a585870c80a8c92afbc3b3239060ad0786c928 not found: ID does not exist" containerID="407ecfba92e4abd805baba74a6a585870c80a8c92afbc3b3239060ad0786c928"
Jan 27 14:00:28 crc kubenswrapper[4914]: I0127 14:00:28.644286 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"407ecfba92e4abd805baba74a6a585870c80a8c92afbc3b3239060ad0786c928"} err="failed to get container status \"407ecfba92e4abd805baba74a6a585870c80a8c92afbc3b3239060ad0786c928\": rpc error: code = NotFound desc = could not find container \"407ecfba92e4abd805baba74a6a585870c80a8c92afbc3b3239060ad0786c928\": container with ID starting with 407ecfba92e4abd805baba74a6a585870c80a8c92afbc3b3239060ad0786c928 not found: ID does not exist"
Jan 27 14:00:29 crc kubenswrapper[4914]: I0127 14:00:29.261386 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6444797f4c-hnlrd"
Jan 27 14:00:30 crc kubenswrapper[4914]: I0127 14:00:30.300824 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ddcbb1-52ed-4029-baa4-03cd6f140d80" path="/var/lib/kubelet/pods/e8ddcbb1-52ed-4029-baa4-03cd6f140d80/volumes"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.069864 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fbv4k"]
Jan 27 14:00:38 crc kubenswrapper[4914]: E0127 14:00:38.070741 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ddcbb1-52ed-4029-baa4-03cd6f140d80" containerName="extract-utilities"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.070761 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ddcbb1-52ed-4029-baa4-03cd6f140d80" containerName="extract-utilities"
Jan 27 14:00:38 crc kubenswrapper[4914]: E0127 14:00:38.070784 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ddcbb1-52ed-4029-baa4-03cd6f140d80" containerName="registry-server"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.070794 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ddcbb1-52ed-4029-baa4-03cd6f140d80" containerName="registry-server"
Jan 27 14:00:38 crc kubenswrapper[4914]: E0127 14:00:38.070818 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ddcbb1-52ed-4029-baa4-03cd6f140d80" containerName="extract-content"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.070860 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ddcbb1-52ed-4029-baa4-03cd6f140d80" containerName="extract-content"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.071029 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8ddcbb1-52ed-4029-baa4-03cd6f140d80" containerName="registry-server"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.072276 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fbv4k"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.086612 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fbv4k"]
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.170004 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-catalog-content\") pod \"certified-operators-fbv4k\" (UID: \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\") " pod="openshift-marketplace/certified-operators-fbv4k"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.170069 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plxbk\" (UniqueName: \"kubernetes.io/projected/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-kube-api-access-plxbk\") pod \"certified-operators-fbv4k\" (UID: \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\") " pod="openshift-marketplace/certified-operators-fbv4k"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.170102 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-utilities\") pod \"certified-operators-fbv4k\" (UID: \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\") " pod="openshift-marketplace/certified-operators-fbv4k"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.271225 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-catalog-content\") pod \"certified-operators-fbv4k\" (UID: \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\") " pod="openshift-marketplace/certified-operators-fbv4k"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.271297 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plxbk\" (UniqueName: \"kubernetes.io/projected/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-kube-api-access-plxbk\") pod \"certified-operators-fbv4k\" (UID: \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\") " pod="openshift-marketplace/certified-operators-fbv4k"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.271344 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-utilities\") pod \"certified-operators-fbv4k\" (UID: \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\") " pod="openshift-marketplace/certified-operators-fbv4k"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.271811 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-catalog-content\") pod \"certified-operators-fbv4k\" (UID: \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\") " pod="openshift-marketplace/certified-operators-fbv4k"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.271904 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-utilities\") pod \"certified-operators-fbv4k\" (UID: \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\") " pod="openshift-marketplace/certified-operators-fbv4k"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.296607 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plxbk\" (UniqueName: \"kubernetes.io/projected/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-kube-api-access-plxbk\") pod \"certified-operators-fbv4k\" (UID: \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\") " pod="openshift-marketplace/certified-operators-fbv4k"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.394105 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fbv4k"
Jan 27 14:00:38 crc kubenswrapper[4914]: I0127 14:00:38.642572 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fbv4k"]
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.017986 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6fb95778d6-wt7m7"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.662541 4914 generic.go:334] "Generic (PLEG): container finished" podID="843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" containerID="1fca804eb0a2bd95b5e206da254433a27d831b26adeb8a853c90411555c714c9" exitCode=0
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.662609 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbv4k" event={"ID":"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a","Type":"ContainerDied","Data":"1fca804eb0a2bd95b5e206da254433a27d831b26adeb8a853c90411555c714c9"}
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.662664 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbv4k" event={"ID":"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a","Type":"ContainerStarted","Data":"a27c3d5e5d681880f82336e5f54eab353f5b449c5c45116a9cca7396a94ec2d0"}
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.704491 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-v75q2"]
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.707271 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.709432 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.709607 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.713158 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7"]
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.713953 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7bx94"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.715254 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.717174 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.750487 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7"]
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.795594 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-gxlp7"]
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.796712 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gxlp7"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.798323 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ed6bd514-9580-4226-927d-9bb52c0a6d76-frr-sockets\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.798364 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f61fcde0-6647-4031-a3fa-4de22ba93d52-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gxxf7\" (UID: \"f61fcde0-6647-4031-a3fa-4de22ba93d52\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.798407 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmskh\" (UniqueName: \"kubernetes.io/projected/ed6bd514-9580-4226-927d-9bb52c0a6d76-kube-api-access-nmskh\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.798432 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed6bd514-9580-4226-927d-9bb52c0a6d76-metrics-certs\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.798466 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwr2\" (UniqueName: \"kubernetes.io/projected/f61fcde0-6647-4031-a3fa-4de22ba93d52-kube-api-access-xmwr2\") pod \"frr-k8s-webhook-server-7df86c4f6c-gxxf7\" (UID: \"f61fcde0-6647-4031-a3fa-4de22ba93d52\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.798491 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ed6bd514-9580-4226-927d-9bb52c0a6d76-reloader\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.798534 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ed6bd514-9580-4226-927d-9bb52c0a6d76-metrics\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.798554 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ed6bd514-9580-4226-927d-9bb52c0a6d76-frr-startup\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.798590 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ed6bd514-9580-4226-927d-9bb52c0a6d76-frr-conf\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.800257 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.800456 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.800572 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rl8gl"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.800975 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.831065 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-c2g6z"]
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.831940 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-c2g6z"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.837137 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-c2g6z"]
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.840031 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899575 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ed6bd514-9580-4226-927d-9bb52c0a6d76-frr-conf\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899618 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ed6bd514-9580-4226-927d-9bb52c0a6d76-frr-sockets\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899639 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f61fcde0-6647-4031-a3fa-4de22ba93d52-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gxxf7\" (UID: \"f61fcde0-6647-4031-a3fa-4de22ba93d52\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899666 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bafcf3de-a99a-4d2a-8ed0-55411eea67d0-cert\") pod \"controller-6968d8fdc4-c2g6z\" (UID: \"bafcf3de-a99a-4d2a-8ed0-55411eea67d0\") " pod="metallb-system/controller-6968d8fdc4-c2g6z"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899684 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-metrics-certs\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899706 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmskh\" (UniqueName: \"kubernetes.io/projected/ed6bd514-9580-4226-927d-9bb52c0a6d76-kube-api-access-nmskh\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899724 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8jzx\" (UniqueName: \"kubernetes.io/projected/bafcf3de-a99a-4d2a-8ed0-55411eea67d0-kube-api-access-l8jzx\") pod \"controller-6968d8fdc4-c2g6z\" (UID: \"bafcf3de-a99a-4d2a-8ed0-55411eea67d0\") " pod="metallb-system/controller-6968d8fdc4-c2g6z"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899739 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed6bd514-9580-4226-927d-9bb52c0a6d76-metrics-certs\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899765 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmwr2\" (UniqueName: \"kubernetes.io/projected/f61fcde0-6647-4031-a3fa-4de22ba93d52-kube-api-access-xmwr2\") pod \"frr-k8s-webhook-server-7df86c4f6c-gxxf7\" (UID: \"f61fcde0-6647-4031-a3fa-4de22ba93d52\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899782 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-memberlist\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899797 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ed6bd514-9580-4226-927d-9bb52c0a6d76-reloader\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899823 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bafcf3de-a99a-4d2a-8ed0-55411eea67d0-metrics-certs\") pod \"controller-6968d8fdc4-c2g6z\" (UID: \"bafcf3de-a99a-4d2a-8ed0-55411eea67d0\") " pod="metallb-system/controller-6968d8fdc4-c2g6z"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899855 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0452f298-73ba-4192-9aba-307771710712-metallb-excludel2\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899873 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ed6bd514-9580-4226-927d-9bb52c0a6d76-metrics\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899888 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ed6bd514-9580-4226-927d-9bb52c0a6d76-frr-startup\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.899907 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8l6h\" (UniqueName: \"kubernetes.io/projected/0452f298-73ba-4192-9aba-307771710712-kube-api-access-x8l6h\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.900269 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ed6bd514-9580-4226-927d-9bb52c0a6d76-frr-conf\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.900446 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ed6bd514-9580-4226-927d-9bb52c0a6d76-frr-sockets\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: E0127 14:00:39.900512 4914 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Jan 27 14:00:39 crc kubenswrapper[4914]: E0127 14:00:39.900552 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f61fcde0-6647-4031-a3fa-4de22ba93d52-cert podName:f61fcde0-6647-4031-a3fa-4de22ba93d52 nodeName:}" failed. No retries permitted until 2026-01-27 14:00:40.400536525 +0000 UTC m=+998.712886610 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f61fcde0-6647-4031-a3fa-4de22ba93d52-cert") pod "frr-k8s-webhook-server-7df86c4f6c-gxxf7" (UID: "f61fcde0-6647-4031-a3fa-4de22ba93d52") : secret "frr-k8s-webhook-server-cert" not found
Jan 27 14:00:39 crc kubenswrapper[4914]: E0127 14:00:39.901253 4914 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Jan 27 14:00:39 crc kubenswrapper[4914]: E0127 14:00:39.901347 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed6bd514-9580-4226-927d-9bb52c0a6d76-metrics-certs podName:ed6bd514-9580-4226-927d-9bb52c0a6d76 nodeName:}" failed. No retries permitted until 2026-01-27 14:00:40.401324788 +0000 UTC m=+998.713674953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed6bd514-9580-4226-927d-9bb52c0a6d76-metrics-certs") pod "frr-k8s-v75q2" (UID: "ed6bd514-9580-4226-927d-9bb52c0a6d76") : secret "frr-k8s-certs-secret" not found
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.901717 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ed6bd514-9580-4226-927d-9bb52c0a6d76-metrics\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.901689 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ed6bd514-9580-4226-927d-9bb52c0a6d76-reloader\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.903885 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ed6bd514-9580-4226-927d-9bb52c0a6d76-frr-startup\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.939513 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmwr2\" (UniqueName: \"kubernetes.io/projected/f61fcde0-6647-4031-a3fa-4de22ba93d52-kube-api-access-xmwr2\") pod \"frr-k8s-webhook-server-7df86c4f6c-gxxf7\" (UID: \"f61fcde0-6647-4031-a3fa-4de22ba93d52\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7"
Jan 27 14:00:39 crc kubenswrapper[4914]: I0127 14:00:39.943429 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmskh\" (UniqueName: \"kubernetes.io/projected/ed6bd514-9580-4226-927d-9bb52c0a6d76-kube-api-access-nmskh\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2"
Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.001344 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-memberlist\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7"
Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.001401 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bafcf3de-a99a-4d2a-8ed0-55411eea67d0-metrics-certs\") pod \"controller-6968d8fdc4-c2g6z\" (UID: \"bafcf3de-a99a-4d2a-8ed0-55411eea67d0\") " pod="metallb-system/controller-6968d8fdc4-c2g6z"
Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.001427 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0452f298-73ba-4192-9aba-307771710712-metallb-excludel2\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7"
Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.001453 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8l6h\" (UniqueName: \"kubernetes.io/projected/0452f298-73ba-4192-9aba-307771710712-kube-api-access-x8l6h\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7"
Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.001493 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-metrics-certs\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7"
Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.001507 4914 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bafcf3de-a99a-4d2a-8ed0-55411eea67d0-cert\") pod \"controller-6968d8fdc4-c2g6z\" (UID: \"bafcf3de-a99a-4d2a-8ed0-55411eea67d0\") " pod="metallb-system/controller-6968d8fdc4-c2g6z" Jan 27 14:00:40 crc kubenswrapper[4914]: E0127 14:00:40.001530 4914 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 14:00:40 crc kubenswrapper[4914]: E0127 14:00:40.001591 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-memberlist podName:0452f298-73ba-4192-9aba-307771710712 nodeName:}" failed. No retries permitted until 2026-01-27 14:00:40.501573681 +0000 UTC m=+998.813923776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-memberlist") pod "speaker-gxlp7" (UID: "0452f298-73ba-4192-9aba-307771710712") : secret "metallb-memberlist" not found Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.001531 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8jzx\" (UniqueName: \"kubernetes.io/projected/bafcf3de-a99a-4d2a-8ed0-55411eea67d0-kube-api-access-l8jzx\") pod \"controller-6968d8fdc4-c2g6z\" (UID: \"bafcf3de-a99a-4d2a-8ed0-55411eea67d0\") " pod="metallb-system/controller-6968d8fdc4-c2g6z" Jan 27 14:00:40 crc kubenswrapper[4914]: E0127 14:00:40.001854 4914 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 27 14:00:40 crc kubenswrapper[4914]: E0127 14:00:40.001880 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-metrics-certs podName:0452f298-73ba-4192-9aba-307771710712 nodeName:}" failed. 
No retries permitted until 2026-01-27 14:00:40.501872069 +0000 UTC m=+998.814222164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-metrics-certs") pod "speaker-gxlp7" (UID: "0452f298-73ba-4192-9aba-307771710712") : secret "speaker-certs-secret" not found Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.003103 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0452f298-73ba-4192-9aba-307771710712-metallb-excludel2\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.006256 4914 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.007186 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bafcf3de-a99a-4d2a-8ed0-55411eea67d0-metrics-certs\") pod \"controller-6968d8fdc4-c2g6z\" (UID: \"bafcf3de-a99a-4d2a-8ed0-55411eea67d0\") " pod="metallb-system/controller-6968d8fdc4-c2g6z" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.015577 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bafcf3de-a99a-4d2a-8ed0-55411eea67d0-cert\") pod \"controller-6968d8fdc4-c2g6z\" (UID: \"bafcf3de-a99a-4d2a-8ed0-55411eea67d0\") " pod="metallb-system/controller-6968d8fdc4-c2g6z" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.021888 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8l6h\" (UniqueName: \"kubernetes.io/projected/0452f298-73ba-4192-9aba-307771710712-kube-api-access-x8l6h\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " 
pod="metallb-system/speaker-gxlp7" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.022529 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8jzx\" (UniqueName: \"kubernetes.io/projected/bafcf3de-a99a-4d2a-8ed0-55411eea67d0-kube-api-access-l8jzx\") pod \"controller-6968d8fdc4-c2g6z\" (UID: \"bafcf3de-a99a-4d2a-8ed0-55411eea67d0\") " pod="metallb-system/controller-6968d8fdc4-c2g6z" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.155577 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-c2g6z" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.407613 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f61fcde0-6647-4031-a3fa-4de22ba93d52-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gxxf7\" (UID: \"f61fcde0-6647-4031-a3fa-4de22ba93d52\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.408063 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed6bd514-9580-4226-927d-9bb52c0a6d76-metrics-certs\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.412374 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f61fcde0-6647-4031-a3fa-4de22ba93d52-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gxxf7\" (UID: \"f61fcde0-6647-4031-a3fa-4de22ba93d52\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.412489 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/ed6bd514-9580-4226-927d-9bb52c0a6d76-metrics-certs\") pod \"frr-k8s-v75q2\" (UID: \"ed6bd514-9580-4226-927d-9bb52c0a6d76\") " pod="metallb-system/frr-k8s-v75q2" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.509207 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-metrics-certs\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.509280 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-memberlist\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7" Jan 27 14:00:40 crc kubenswrapper[4914]: E0127 14:00:40.509411 4914 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 14:00:40 crc kubenswrapper[4914]: E0127 14:00:40.509469 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-memberlist podName:0452f298-73ba-4192-9aba-307771710712 nodeName:}" failed. No retries permitted until 2026-01-27 14:00:41.509454589 +0000 UTC m=+999.821804674 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-memberlist") pod "speaker-gxlp7" (UID: "0452f298-73ba-4192-9aba-307771710712") : secret "metallb-memberlist" not found Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.512645 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-metrics-certs\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7" Jan 27 14:00:40 crc kubenswrapper[4914]: W0127 14:00:40.577214 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbafcf3de_a99a_4d2a_8ed0_55411eea67d0.slice/crio-0cd679164cae225c82ea94bf16052cde14a9177678bfaaa19bbc50bc8f35922b WatchSource:0}: Error finding container 0cd679164cae225c82ea94bf16052cde14a9177678bfaaa19bbc50bc8f35922b: Status 404 returned error can't find the container with id 0cd679164cae225c82ea94bf16052cde14a9177678bfaaa19bbc50bc8f35922b Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.577746 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-c2g6z"] Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.643280 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-v75q2" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.654047 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7" Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.668607 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-c2g6z" event={"ID":"bafcf3de-a99a-4d2a-8ed0-55411eea67d0","Type":"ContainerStarted","Data":"0cd679164cae225c82ea94bf16052cde14a9177678bfaaa19bbc50bc8f35922b"} Jan 27 14:00:40 crc kubenswrapper[4914]: I0127 14:00:40.670976 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbv4k" event={"ID":"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a","Type":"ContainerStarted","Data":"700587a49ae7b7cf7287f6581d19f8e90aec29005493fe1b651955f803b63022"} Jan 27 14:00:41 crc kubenswrapper[4914]: I0127 14:00:41.159485 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7"] Jan 27 14:00:41 crc kubenswrapper[4914]: W0127 14:00:41.165921 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf61fcde0_6647_4031_a3fa_4de22ba93d52.slice/crio-eb82cb0cbf073163843b3156278967ae1bdbd3e462d564643557b5b157cb2eb1 WatchSource:0}: Error finding container eb82cb0cbf073163843b3156278967ae1bdbd3e462d564643557b5b157cb2eb1: Status 404 returned error can't find the container with id eb82cb0cbf073163843b3156278967ae1bdbd3e462d564643557b5b157cb2eb1 Jan 27 14:00:41 crc kubenswrapper[4914]: I0127 14:00:41.535414 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-memberlist\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7" Jan 27 14:00:41 crc kubenswrapper[4914]: I0127 14:00:41.542418 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/0452f298-73ba-4192-9aba-307771710712-memberlist\") pod \"speaker-gxlp7\" (UID: \"0452f298-73ba-4192-9aba-307771710712\") " pod="metallb-system/speaker-gxlp7" Jan 27 14:00:41 crc kubenswrapper[4914]: I0127 14:00:41.615660 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gxlp7" Jan 27 14:00:41 crc kubenswrapper[4914]: W0127 14:00:41.638686 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0452f298_73ba_4192_9aba_307771710712.slice/crio-e00d776b973627dd54a0ac3a4f8bb105989e7bd483a1e88875532b6018dc4926 WatchSource:0}: Error finding container e00d776b973627dd54a0ac3a4f8bb105989e7bd483a1e88875532b6018dc4926: Status 404 returned error can't find the container with id e00d776b973627dd54a0ac3a4f8bb105989e7bd483a1e88875532b6018dc4926 Jan 27 14:00:41 crc kubenswrapper[4914]: I0127 14:00:41.680075 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-c2g6z" event={"ID":"bafcf3de-a99a-4d2a-8ed0-55411eea67d0","Type":"ContainerStarted","Data":"1c91f952777224a2bbd7590438c9aaf6411feb4a9410f3e93b1544cef184b79a"} Jan 27 14:00:41 crc kubenswrapper[4914]: I0127 14:00:41.680174 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-c2g6z" Jan 27 14:00:41 crc kubenswrapper[4914]: I0127 14:00:41.680195 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-c2g6z" event={"ID":"bafcf3de-a99a-4d2a-8ed0-55411eea67d0","Type":"ContainerStarted","Data":"e8a2af6235a0335fb71ecc0ac4a0a2e8a656385ee753d437f2634600de674790"} Jan 27 14:00:41 crc kubenswrapper[4914]: I0127 14:00:41.682303 4914 generic.go:334] "Generic (PLEG): container finished" podID="843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" containerID="700587a49ae7b7cf7287f6581d19f8e90aec29005493fe1b651955f803b63022" exitCode=0 Jan 27 14:00:41 crc 
kubenswrapper[4914]: I0127 14:00:41.682414 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbv4k" event={"ID":"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a","Type":"ContainerDied","Data":"700587a49ae7b7cf7287f6581d19f8e90aec29005493fe1b651955f803b63022"} Jan 27 14:00:41 crc kubenswrapper[4914]: I0127 14:00:41.705187 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gxlp7" event={"ID":"0452f298-73ba-4192-9aba-307771710712","Type":"ContainerStarted","Data":"e00d776b973627dd54a0ac3a4f8bb105989e7bd483a1e88875532b6018dc4926"} Jan 27 14:00:41 crc kubenswrapper[4914]: I0127 14:00:41.711648 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-c2g6z" podStartSLOduration=2.71162518 podStartE2EDuration="2.71162518s" podCreationTimestamp="2026-01-27 14:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:00:41.710789008 +0000 UTC m=+1000.023139103" watchObservedRunningTime="2026-01-27 14:00:41.71162518 +0000 UTC m=+1000.023975265" Jan 27 14:00:41 crc kubenswrapper[4914]: I0127 14:00:41.734363 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v75q2" event={"ID":"ed6bd514-9580-4226-927d-9bb52c0a6d76","Type":"ContainerStarted","Data":"16063f7c7b50ae6fdb1e472ca2acbdb35b7ffa03869b6b8b6ad51fc4393ed67c"} Jan 27 14:00:41 crc kubenswrapper[4914]: I0127 14:00:41.742957 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7" event={"ID":"f61fcde0-6647-4031-a3fa-4de22ba93d52","Type":"ContainerStarted","Data":"eb82cb0cbf073163843b3156278967ae1bdbd3e462d564643557b5b157cb2eb1"} Jan 27 14:00:42 crc kubenswrapper[4914]: I0127 14:00:42.762233 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbv4k" 
event={"ID":"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a","Type":"ContainerStarted","Data":"c6393b32eb17cb34b56d52ac7500d6075280aa3589e7563f99ee5d1e2383099e"} Jan 27 14:00:42 crc kubenswrapper[4914]: I0127 14:00:42.765061 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gxlp7" event={"ID":"0452f298-73ba-4192-9aba-307771710712","Type":"ContainerStarted","Data":"50e7c89a1d3916af921f70358de9948ac18c7cb6a2e0b2e16a72059d9441470d"} Jan 27 14:00:42 crc kubenswrapper[4914]: I0127 14:00:42.765095 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-gxlp7" Jan 27 14:00:42 crc kubenswrapper[4914]: I0127 14:00:42.765105 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gxlp7" event={"ID":"0452f298-73ba-4192-9aba-307771710712","Type":"ContainerStarted","Data":"9c3899b4147d508cd0e503522fd56f2058ce096497f05d30121c5fe7ee16c5b9"} Jan 27 14:00:42 crc kubenswrapper[4914]: I0127 14:00:42.786289 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fbv4k" podStartSLOduration=2.12005121 podStartE2EDuration="4.786271767s" podCreationTimestamp="2026-01-27 14:00:38 +0000 UTC" firstStartedPulling="2026-01-27 14:00:39.667046802 +0000 UTC m=+997.979396897" lastFinishedPulling="2026-01-27 14:00:42.333267369 +0000 UTC m=+1000.645617454" observedRunningTime="2026-01-27 14:00:42.784460837 +0000 UTC m=+1001.096810922" watchObservedRunningTime="2026-01-27 14:00:42.786271767 +0000 UTC m=+1001.098621852" Jan 27 14:00:42 crc kubenswrapper[4914]: I0127 14:00:42.799218 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-gxlp7" podStartSLOduration=3.7991985169999998 podStartE2EDuration="3.799198517s" podCreationTimestamp="2026-01-27 14:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
14:00:42.799057573 +0000 UTC m=+1001.111407658" watchObservedRunningTime="2026-01-27 14:00:42.799198517 +0000 UTC m=+1001.111548602" Jan 27 14:00:48 crc kubenswrapper[4914]: I0127 14:00:48.394826 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fbv4k" Jan 27 14:00:48 crc kubenswrapper[4914]: I0127 14:00:48.395205 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fbv4k" Jan 27 14:00:48 crc kubenswrapper[4914]: I0127 14:00:48.447808 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fbv4k" Jan 27 14:00:49 crc kubenswrapper[4914]: I0127 14:00:49.166240 4914 generic.go:334] "Generic (PLEG): container finished" podID="ed6bd514-9580-4226-927d-9bb52c0a6d76" containerID="ba8584686aa0de7f4374b10a01e302a4cfe95cf9cc6ade2534006d7ad0dd068a" exitCode=0 Jan 27 14:00:49 crc kubenswrapper[4914]: I0127 14:00:49.166439 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v75q2" event={"ID":"ed6bd514-9580-4226-927d-9bb52c0a6d76","Type":"ContainerDied","Data":"ba8584686aa0de7f4374b10a01e302a4cfe95cf9cc6ade2534006d7ad0dd068a"} Jan 27 14:00:49 crc kubenswrapper[4914]: I0127 14:00:49.170318 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7" event={"ID":"f61fcde0-6647-4031-a3fa-4de22ba93d52","Type":"ContainerStarted","Data":"1ec8b787367b8b23d6a4837f25c22b725c3bda3d6e40246a495b3bfe271ebae2"} Jan 27 14:00:49 crc kubenswrapper[4914]: I0127 14:00:49.170439 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7" Jan 27 14:00:49 crc kubenswrapper[4914]: I0127 14:00:49.209106 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7" 
podStartSLOduration=2.49077852 podStartE2EDuration="10.209082252s" podCreationTimestamp="2026-01-27 14:00:39 +0000 UTC" firstStartedPulling="2026-01-27 14:00:41.168630178 +0000 UTC m=+999.480980263" lastFinishedPulling="2026-01-27 14:00:48.88693391 +0000 UTC m=+1007.199283995" observedRunningTime="2026-01-27 14:00:49.203149511 +0000 UTC m=+1007.515499606" watchObservedRunningTime="2026-01-27 14:00:49.209082252 +0000 UTC m=+1007.521432407" Jan 27 14:00:49 crc kubenswrapper[4914]: I0127 14:00:49.216125 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fbv4k" Jan 27 14:00:49 crc kubenswrapper[4914]: I0127 14:00:49.298479 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fbv4k"] Jan 27 14:00:50 crc kubenswrapper[4914]: I0127 14:00:50.159952 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-c2g6z" Jan 27 14:00:50 crc kubenswrapper[4914]: I0127 14:00:50.178618 4914 generic.go:334] "Generic (PLEG): container finished" podID="ed6bd514-9580-4226-927d-9bb52c0a6d76" containerID="42f22a62b56bd4e8ea6b6bb93068f4861939895fb0fd9e57d038613661486743" exitCode=0 Jan 27 14:00:50 crc kubenswrapper[4914]: I0127 14:00:50.178693 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v75q2" event={"ID":"ed6bd514-9580-4226-927d-9bb52c0a6d76","Type":"ContainerDied","Data":"42f22a62b56bd4e8ea6b6bb93068f4861939895fb0fd9e57d038613661486743"} Jan 27 14:00:51 crc kubenswrapper[4914]: I0127 14:00:51.185981 4914 generic.go:334] "Generic (PLEG): container finished" podID="ed6bd514-9580-4226-927d-9bb52c0a6d76" containerID="08078c403ee0630cdd1d939526863269b27e98665a834aecf6aefcab4eece7aa" exitCode=0 Jan 27 14:00:51 crc kubenswrapper[4914]: I0127 14:00:51.187093 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v75q2" 
event={"ID":"ed6bd514-9580-4226-927d-9bb52c0a6d76","Type":"ContainerDied","Data":"08078c403ee0630cdd1d939526863269b27e98665a834aecf6aefcab4eece7aa"} Jan 27 14:00:51 crc kubenswrapper[4914]: I0127 14:00:51.187230 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fbv4k" podUID="843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" containerName="registry-server" containerID="cri-o://c6393b32eb17cb34b56d52ac7500d6075280aa3589e7563f99ee5d1e2383099e" gracePeriod=2 Jan 27 14:00:51 crc kubenswrapper[4914]: I0127 14:00:51.618880 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-gxlp7" Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.194329 4914 generic.go:334] "Generic (PLEG): container finished" podID="843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" containerID="c6393b32eb17cb34b56d52ac7500d6075280aa3589e7563f99ee5d1e2383099e" exitCode=0 Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.194395 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbv4k" event={"ID":"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a","Type":"ContainerDied","Data":"c6393b32eb17cb34b56d52ac7500d6075280aa3589e7563f99ee5d1e2383099e"} Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.198075 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v75q2" event={"ID":"ed6bd514-9580-4226-927d-9bb52c0a6d76","Type":"ContainerStarted","Data":"e2715ed163a0c2363dc287fc233aa3b76827889a8a986590e7af04a763081937"} Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.198110 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v75q2" event={"ID":"ed6bd514-9580-4226-927d-9bb52c0a6d76","Type":"ContainerStarted","Data":"587441e0788c417dec62e8c20c7e7d6ebcbb443a2c31636993cf21aff4a20d96"} Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.198120 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-v75q2" event={"ID":"ed6bd514-9580-4226-927d-9bb52c0a6d76","Type":"ContainerStarted","Data":"943660e9bc3b8fc601aeba8a242bcdbb6b257b21acf7dcd4562819e82f0c7aea"} Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.708658 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fbv4k" Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.841722 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-utilities\") pod \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\" (UID: \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\") " Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.842132 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-catalog-content\") pod \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\" (UID: \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\") " Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.842295 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plxbk\" (UniqueName: \"kubernetes.io/projected/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-kube-api-access-plxbk\") pod \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\" (UID: \"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a\") " Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.843518 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-utilities" (OuterVolumeSpecName: "utilities") pod "843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" (UID: "843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.850007 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-kube-api-access-plxbk" (OuterVolumeSpecName: "kube-api-access-plxbk") pod "843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" (UID: "843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a"). InnerVolumeSpecName "kube-api-access-plxbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.887145 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" (UID: "843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.944001 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.944038 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plxbk\" (UniqueName: \"kubernetes.io/projected/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-kube-api-access-plxbk\") on node \"crc\" DevicePath \"\"" Jan 27 14:00:52 crc kubenswrapper[4914]: I0127 14:00:52.944053 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:00:53 crc kubenswrapper[4914]: I0127 14:00:53.208275 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fbv4k" Jan 27 14:00:53 crc kubenswrapper[4914]: I0127 14:00:53.208282 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbv4k" event={"ID":"843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a","Type":"ContainerDied","Data":"a27c3d5e5d681880f82336e5f54eab353f5b449c5c45116a9cca7396a94ec2d0"} Jan 27 14:00:53 crc kubenswrapper[4914]: I0127 14:00:53.209024 4914 scope.go:117] "RemoveContainer" containerID="c6393b32eb17cb34b56d52ac7500d6075280aa3589e7563f99ee5d1e2383099e" Jan 27 14:00:53 crc kubenswrapper[4914]: I0127 14:00:53.213660 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v75q2" event={"ID":"ed6bd514-9580-4226-927d-9bb52c0a6d76","Type":"ContainerStarted","Data":"71c2763b038a2456f41619c79f8aca1c9395c6c1f3f140dff1ff7de84d8fcddb"} Jan 27 14:00:53 crc kubenswrapper[4914]: I0127 14:00:53.213706 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v75q2" event={"ID":"ed6bd514-9580-4226-927d-9bb52c0a6d76","Type":"ContainerStarted","Data":"216ab2307072d7bc3affeea2798cd7fcea53d57f3ddfc8a84f6f6ec5fb654f33"} Jan 27 14:00:53 crc kubenswrapper[4914]: I0127 14:00:53.213721 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v75q2" event={"ID":"ed6bd514-9580-4226-927d-9bb52c0a6d76","Type":"ContainerStarted","Data":"a4632ecd671b3b9b49360b3ee23dea9a01ad987b3cee9575649ffd213364fad3"} Jan 27 14:00:53 crc kubenswrapper[4914]: I0127 14:00:53.213987 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-v75q2" Jan 27 14:00:53 crc kubenswrapper[4914]: I0127 14:00:53.237577 4914 scope.go:117] "RemoveContainer" containerID="700587a49ae7b7cf7287f6581d19f8e90aec29005493fe1b651955f803b63022" Jan 27 14:00:53 crc kubenswrapper[4914]: I0127 14:00:53.246879 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-v75q2" podStartSLOduration=6.118756456 podStartE2EDuration="14.246862882s" podCreationTimestamp="2026-01-27 14:00:39 +0000 UTC" firstStartedPulling="2026-01-27 14:00:40.763638175 +0000 UTC m=+999.075988260" lastFinishedPulling="2026-01-27 14:00:48.891744601 +0000 UTC m=+1007.204094686" observedRunningTime="2026-01-27 14:00:53.243406407 +0000 UTC m=+1011.555756542" watchObservedRunningTime="2026-01-27 14:00:53.246862882 +0000 UTC m=+1011.559212967" Jan 27 14:00:53 crc kubenswrapper[4914]: I0127 14:00:53.262586 4914 scope.go:117] "RemoveContainer" containerID="1fca804eb0a2bd95b5e206da254433a27d831b26adeb8a853c90411555c714c9" Jan 27 14:00:53 crc kubenswrapper[4914]: I0127 14:00:53.267420 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fbv4k"] Jan 27 14:00:53 crc kubenswrapper[4914]: I0127 14:00:53.274697 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fbv4k"] Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.302133 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" path="/var/lib/kubelet/pods/843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a/volumes" Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.761227 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5xdrv"] Jan 27 14:00:54 crc kubenswrapper[4914]: E0127 14:00:54.761530 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" containerName="extract-utilities" Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.761545 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" containerName="extract-utilities" Jan 27 14:00:54 crc kubenswrapper[4914]: E0127 14:00:54.761560 4914 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" containerName="extract-content" Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.761570 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" containerName="extract-content" Jan 27 14:00:54 crc kubenswrapper[4914]: E0127 14:00:54.761596 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" containerName="registry-server" Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.761603 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" containerName="registry-server" Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.762043 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="843ccf3d-e0f6-45ea-9cfa-f98adf98ac2a" containerName="registry-server" Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.762573 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5xdrv" Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.765677 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.766023 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nfcsn" Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.766504 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.768288 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmt9s\" (UniqueName: \"kubernetes.io/projected/359f8636-db16-4be4-a671-e9d3fd1e8175-kube-api-access-zmt9s\") pod \"openstack-operator-index-5xdrv\" (UID: \"359f8636-db16-4be4-a671-e9d3fd1e8175\") " 
pod="openstack-operators/openstack-operator-index-5xdrv" Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.778501 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5xdrv"] Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.869185 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmt9s\" (UniqueName: \"kubernetes.io/projected/359f8636-db16-4be4-a671-e9d3fd1e8175-kube-api-access-zmt9s\") pod \"openstack-operator-index-5xdrv\" (UID: \"359f8636-db16-4be4-a671-e9d3fd1e8175\") " pod="openstack-operators/openstack-operator-index-5xdrv" Jan 27 14:00:54 crc kubenswrapper[4914]: I0127 14:00:54.894583 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmt9s\" (UniqueName: \"kubernetes.io/projected/359f8636-db16-4be4-a671-e9d3fd1e8175-kube-api-access-zmt9s\") pod \"openstack-operator-index-5xdrv\" (UID: \"359f8636-db16-4be4-a671-e9d3fd1e8175\") " pod="openstack-operators/openstack-operator-index-5xdrv" Jan 27 14:00:55 crc kubenswrapper[4914]: I0127 14:00:55.077186 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5xdrv" Jan 27 14:00:55 crc kubenswrapper[4914]: I0127 14:00:55.288282 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5xdrv"] Jan 27 14:00:55 crc kubenswrapper[4914]: I0127 14:00:55.643820 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-v75q2" Jan 27 14:00:55 crc kubenswrapper[4914]: I0127 14:00:55.688500 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-v75q2" Jan 27 14:00:56 crc kubenswrapper[4914]: I0127 14:00:56.245075 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5xdrv" event={"ID":"359f8636-db16-4be4-a671-e9d3fd1e8175","Type":"ContainerStarted","Data":"c7b5093bb0f6072a71bdb9ea812f485e3afb75d925e7f0a2cc090ad7d8df5f8c"} Jan 27 14:00:57 crc kubenswrapper[4914]: I0127 14:00:57.255753 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5xdrv" event={"ID":"359f8636-db16-4be4-a671-e9d3fd1e8175","Type":"ContainerStarted","Data":"b9ff233ab31757ff903a439452a80fa9b83b9cb570913a5b7259d0c549a1a061"} Jan 27 14:00:57 crc kubenswrapper[4914]: I0127 14:00:57.283657 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5xdrv" podStartSLOduration=2.302086027 podStartE2EDuration="3.283632644s" podCreationTimestamp="2026-01-27 14:00:54 +0000 UTC" firstStartedPulling="2026-01-27 14:00:55.306880218 +0000 UTC m=+1013.619230293" lastFinishedPulling="2026-01-27 14:00:56.288426795 +0000 UTC m=+1014.600776910" observedRunningTime="2026-01-27 14:00:57.275464152 +0000 UTC m=+1015.587814247" watchObservedRunningTime="2026-01-27 14:00:57.283632644 +0000 UTC m=+1015.595982739" Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.140591 4914 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack-operators/openstack-operator-index-5xdrv"] Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.270734 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-5xdrv" podUID="359f8636-db16-4be4-a671-e9d3fd1e8175" containerName="registry-server" containerID="cri-o://b9ff233ab31757ff903a439452a80fa9b83b9cb570913a5b7259d0c549a1a061" gracePeriod=2 Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.741002 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5xdrv" Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.755971 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nqpn7"] Jan 27 14:00:59 crc kubenswrapper[4914]: E0127 14:00:59.756289 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359f8636-db16-4be4-a671-e9d3fd1e8175" containerName="registry-server" Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.756310 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="359f8636-db16-4be4-a671-e9d3fd1e8175" containerName="registry-server" Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.756463 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="359f8636-db16-4be4-a671-e9d3fd1e8175" containerName="registry-server" Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.756999 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nqpn7" Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.763527 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nqpn7"] Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.840561 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmt9s\" (UniqueName: \"kubernetes.io/projected/359f8636-db16-4be4-a671-e9d3fd1e8175-kube-api-access-zmt9s\") pod \"359f8636-db16-4be4-a671-e9d3fd1e8175\" (UID: \"359f8636-db16-4be4-a671-e9d3fd1e8175\") " Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.840805 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7wt8\" (UniqueName: \"kubernetes.io/projected/fac137ad-f24e-4020-a9d4-118fd8cf2dd2-kube-api-access-d7wt8\") pod \"openstack-operator-index-nqpn7\" (UID: \"fac137ad-f24e-4020-a9d4-118fd8cf2dd2\") " pod="openstack-operators/openstack-operator-index-nqpn7" Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.848035 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359f8636-db16-4be4-a671-e9d3fd1e8175-kube-api-access-zmt9s" (OuterVolumeSpecName: "kube-api-access-zmt9s") pod "359f8636-db16-4be4-a671-e9d3fd1e8175" (UID: "359f8636-db16-4be4-a671-e9d3fd1e8175"). InnerVolumeSpecName "kube-api-access-zmt9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.941753 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7wt8\" (UniqueName: \"kubernetes.io/projected/fac137ad-f24e-4020-a9d4-118fd8cf2dd2-kube-api-access-d7wt8\") pod \"openstack-operator-index-nqpn7\" (UID: \"fac137ad-f24e-4020-a9d4-118fd8cf2dd2\") " pod="openstack-operators/openstack-operator-index-nqpn7" Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.941899 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmt9s\" (UniqueName: \"kubernetes.io/projected/359f8636-db16-4be4-a671-e9d3fd1e8175-kube-api-access-zmt9s\") on node \"crc\" DevicePath \"\"" Jan 27 14:00:59 crc kubenswrapper[4914]: I0127 14:00:59.959108 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7wt8\" (UniqueName: \"kubernetes.io/projected/fac137ad-f24e-4020-a9d4-118fd8cf2dd2-kube-api-access-d7wt8\") pod \"openstack-operator-index-nqpn7\" (UID: \"fac137ad-f24e-4020-a9d4-118fd8cf2dd2\") " pod="openstack-operators/openstack-operator-index-nqpn7" Jan 27 14:01:00 crc kubenswrapper[4914]: I0127 14:01:00.103127 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nqpn7" Jan 27 14:01:00 crc kubenswrapper[4914]: I0127 14:01:00.292567 4914 generic.go:334] "Generic (PLEG): container finished" podID="359f8636-db16-4be4-a671-e9d3fd1e8175" containerID="b9ff233ab31757ff903a439452a80fa9b83b9cb570913a5b7259d0c549a1a061" exitCode=0 Jan 27 14:01:00 crc kubenswrapper[4914]: I0127 14:01:00.292847 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5xdrv" event={"ID":"359f8636-db16-4be4-a671-e9d3fd1e8175","Type":"ContainerDied","Data":"b9ff233ab31757ff903a439452a80fa9b83b9cb570913a5b7259d0c549a1a061"} Jan 27 14:01:00 crc kubenswrapper[4914]: I0127 14:01:00.292878 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5xdrv" event={"ID":"359f8636-db16-4be4-a671-e9d3fd1e8175","Type":"ContainerDied","Data":"c7b5093bb0f6072a71bdb9ea812f485e3afb75d925e7f0a2cc090ad7d8df5f8c"} Jan 27 14:01:00 crc kubenswrapper[4914]: I0127 14:01:00.292897 4914 scope.go:117] "RemoveContainer" containerID="b9ff233ab31757ff903a439452a80fa9b83b9cb570913a5b7259d0c549a1a061" Jan 27 14:01:00 crc kubenswrapper[4914]: I0127 14:01:00.292996 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5xdrv" Jan 27 14:01:00 crc kubenswrapper[4914]: I0127 14:01:00.319703 4914 scope.go:117] "RemoveContainer" containerID="b9ff233ab31757ff903a439452a80fa9b83b9cb570913a5b7259d0c549a1a061" Jan 27 14:01:00 crc kubenswrapper[4914]: E0127 14:01:00.320339 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9ff233ab31757ff903a439452a80fa9b83b9cb570913a5b7259d0c549a1a061\": container with ID starting with b9ff233ab31757ff903a439452a80fa9b83b9cb570913a5b7259d0c549a1a061 not found: ID does not exist" containerID="b9ff233ab31757ff903a439452a80fa9b83b9cb570913a5b7259d0c549a1a061" Jan 27 14:01:00 crc kubenswrapper[4914]: I0127 14:01:00.320451 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9ff233ab31757ff903a439452a80fa9b83b9cb570913a5b7259d0c549a1a061"} err="failed to get container status \"b9ff233ab31757ff903a439452a80fa9b83b9cb570913a5b7259d0c549a1a061\": rpc error: code = NotFound desc = could not find container \"b9ff233ab31757ff903a439452a80fa9b83b9cb570913a5b7259d0c549a1a061\": container with ID starting with b9ff233ab31757ff903a439452a80fa9b83b9cb570913a5b7259d0c549a1a061 not found: ID does not exist" Jan 27 14:01:00 crc kubenswrapper[4914]: I0127 14:01:00.327102 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-5xdrv"] Jan 27 14:01:00 crc kubenswrapper[4914]: I0127 14:01:00.335619 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-5xdrv"] Jan 27 14:01:00 crc kubenswrapper[4914]: I0127 14:01:00.367270 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nqpn7"] Jan 27 14:01:00 crc kubenswrapper[4914]: W0127 14:01:00.373176 4914 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfac137ad_f24e_4020_a9d4_118fd8cf2dd2.slice/crio-4fada192f97d1b39af133db77106cbc30f5823291c4a079ebdcfc3dffd11794f WatchSource:0}: Error finding container 4fada192f97d1b39af133db77106cbc30f5823291c4a079ebdcfc3dffd11794f: Status 404 returned error can't find the container with id 4fada192f97d1b39af133db77106cbc30f5823291c4a079ebdcfc3dffd11794f Jan 27 14:01:00 crc kubenswrapper[4914]: I0127 14:01:00.660420 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gxxf7" Jan 27 14:01:01 crc kubenswrapper[4914]: I0127 14:01:01.301697 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nqpn7" event={"ID":"fac137ad-f24e-4020-a9d4-118fd8cf2dd2","Type":"ContainerStarted","Data":"3dea318b23d02f5dd4e49b226f1564018d8d741293b1234ca3fa06ed04f09085"} Jan 27 14:01:01 crc kubenswrapper[4914]: I0127 14:01:01.302059 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nqpn7" event={"ID":"fac137ad-f24e-4020-a9d4-118fd8cf2dd2","Type":"ContainerStarted","Data":"4fada192f97d1b39af133db77106cbc30f5823291c4a079ebdcfc3dffd11794f"} Jan 27 14:01:01 crc kubenswrapper[4914]: I0127 14:01:01.318436 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nqpn7" podStartSLOduration=1.831994666 podStartE2EDuration="2.318416131s" podCreationTimestamp="2026-01-27 14:00:59 +0000 UTC" firstStartedPulling="2026-01-27 14:01:00.377892199 +0000 UTC m=+1018.690242284" lastFinishedPulling="2026-01-27 14:01:00.864313664 +0000 UTC m=+1019.176663749" observedRunningTime="2026-01-27 14:01:01.318123303 +0000 UTC m=+1019.630473418" watchObservedRunningTime="2026-01-27 14:01:01.318416131 +0000 UTC m=+1019.630766206" Jan 27 14:01:02 crc kubenswrapper[4914]: I0127 14:01:02.302727 4914 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="359f8636-db16-4be4-a671-e9d3fd1e8175" path="/var/lib/kubelet/pods/359f8636-db16-4be4-a671-e9d3fd1e8175/volumes" Jan 27 14:01:10 crc kubenswrapper[4914]: I0127 14:01:10.104947 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nqpn7" Jan 27 14:01:10 crc kubenswrapper[4914]: I0127 14:01:10.105485 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nqpn7" Jan 27 14:01:10 crc kubenswrapper[4914]: I0127 14:01:10.132659 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nqpn7" Jan 27 14:01:10 crc kubenswrapper[4914]: I0127 14:01:10.417472 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nqpn7" Jan 27 14:01:10 crc kubenswrapper[4914]: I0127 14:01:10.646550 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-v75q2" Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 14:01:11.587649 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng"] Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 14:01:11.589625 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 14:01:11.595136 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-6jq65" Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 14:01:11.596731 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng"] Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 14:01:11.698344 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29f90886-e5f8-4c3a-8bff-1eea749b8e34-util\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng\" (UID: \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 14:01:11.698467 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vffhn\" (UniqueName: \"kubernetes.io/projected/29f90886-e5f8-4c3a-8bff-1eea749b8e34-kube-api-access-vffhn\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng\" (UID: \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 14:01:11.698492 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29f90886-e5f8-4c3a-8bff-1eea749b8e34-bundle\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng\" (UID: \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 
14:01:11.799206 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vffhn\" (UniqueName: \"kubernetes.io/projected/29f90886-e5f8-4c3a-8bff-1eea749b8e34-kube-api-access-vffhn\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng\" (UID: \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 14:01:11.799253 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29f90886-e5f8-4c3a-8bff-1eea749b8e34-bundle\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng\" (UID: \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 14:01:11.799294 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29f90886-e5f8-4c3a-8bff-1eea749b8e34-util\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng\" (UID: \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 14:01:11.799777 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29f90886-e5f8-4c3a-8bff-1eea749b8e34-util\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng\" (UID: \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 14:01:11.799931 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/29f90886-e5f8-4c3a-8bff-1eea749b8e34-bundle\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng\" (UID: \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 14:01:11.819974 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vffhn\" (UniqueName: \"kubernetes.io/projected/29f90886-e5f8-4c3a-8bff-1eea749b8e34-kube-api-access-vffhn\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng\" (UID: \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" Jan 27 14:01:11 crc kubenswrapper[4914]: I0127 14:01:11.914919 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" Jan 27 14:01:12 crc kubenswrapper[4914]: I0127 14:01:12.318931 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng"] Jan 27 14:01:12 crc kubenswrapper[4914]: W0127 14:01:12.326010 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29f90886_e5f8_4c3a_8bff_1eea749b8e34.slice/crio-9edb2dcb1949d2d85b4d74c167b4dda007f8f2cdb23f291e77a90c9ab29e0ac3 WatchSource:0}: Error finding container 9edb2dcb1949d2d85b4d74c167b4dda007f8f2cdb23f291e77a90c9ab29e0ac3: Status 404 returned error can't find the container with id 9edb2dcb1949d2d85b4d74c167b4dda007f8f2cdb23f291e77a90c9ab29e0ac3 Jan 27 14:01:12 crc kubenswrapper[4914]: I0127 14:01:12.392528 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" 
event={"ID":"29f90886-e5f8-4c3a-8bff-1eea749b8e34","Type":"ContainerStarted","Data":"9edb2dcb1949d2d85b4d74c167b4dda007f8f2cdb23f291e77a90c9ab29e0ac3"} Jan 27 14:01:13 crc kubenswrapper[4914]: I0127 14:01:13.400373 4914 generic.go:334] "Generic (PLEG): container finished" podID="29f90886-e5f8-4c3a-8bff-1eea749b8e34" containerID="4b68ddc0796612791daebee0a9f9ef4b68bd9f6c362659e27887271069bcc5ee" exitCode=0 Jan 27 14:01:13 crc kubenswrapper[4914]: I0127 14:01:13.400418 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" event={"ID":"29f90886-e5f8-4c3a-8bff-1eea749b8e34","Type":"ContainerDied","Data":"4b68ddc0796612791daebee0a9f9ef4b68bd9f6c362659e27887271069bcc5ee"} Jan 27 14:01:14 crc kubenswrapper[4914]: I0127 14:01:14.409810 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" event={"ID":"29f90886-e5f8-4c3a-8bff-1eea749b8e34","Type":"ContainerStarted","Data":"705234f67cb86b17047f0a6435f3076c834fc697bd83bdc19fdd72c71ce770d9"} Jan 27 14:01:15 crc kubenswrapper[4914]: I0127 14:01:15.416467 4914 generic.go:334] "Generic (PLEG): container finished" podID="29f90886-e5f8-4c3a-8bff-1eea749b8e34" containerID="705234f67cb86b17047f0a6435f3076c834fc697bd83bdc19fdd72c71ce770d9" exitCode=0 Jan 27 14:01:15 crc kubenswrapper[4914]: I0127 14:01:15.416526 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" event={"ID":"29f90886-e5f8-4c3a-8bff-1eea749b8e34","Type":"ContainerDied","Data":"705234f67cb86b17047f0a6435f3076c834fc697bd83bdc19fdd72c71ce770d9"} Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.149062 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5kd64"] Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.150853 4914 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.161961 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kd64"] Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.259674 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c8wz\" (UniqueName: \"kubernetes.io/projected/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-kube-api-access-6c8wz\") pod \"community-operators-5kd64\" (UID: \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\") " pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.259757 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-utilities\") pod \"community-operators-5kd64\" (UID: \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\") " pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.259862 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-catalog-content\") pod \"community-operators-5kd64\" (UID: \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\") " pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.361548 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-catalog-content\") pod \"community-operators-5kd64\" (UID: \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\") " pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.361646 4914 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c8wz\" (UniqueName: \"kubernetes.io/projected/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-kube-api-access-6c8wz\") pod \"community-operators-5kd64\" (UID: \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\") " pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.361716 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-utilities\") pod \"community-operators-5kd64\" (UID: \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\") " pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.362220 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-utilities\") pod \"community-operators-5kd64\" (UID: \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\") " pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.362223 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-catalog-content\") pod \"community-operators-5kd64\" (UID: \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\") " pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.380851 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c8wz\" (UniqueName: \"kubernetes.io/projected/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-kube-api-access-6c8wz\") pod \"community-operators-5kd64\" (UID: \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\") " pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.468387 4914 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:16 crc kubenswrapper[4914]: I0127 14:01:16.740378 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kd64"] Jan 27 14:01:16 crc kubenswrapper[4914]: W0127 14:01:16.745277 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6a6f014_bf9d_4477_828a_7c2ac9080fb2.slice/crio-6c8823d9adfe3c4bd1d3a2a7ae20fce4184110b9817138c62e58f4bc9e7d5ca3 WatchSource:0}: Error finding container 6c8823d9adfe3c4bd1d3a2a7ae20fce4184110b9817138c62e58f4bc9e7d5ca3: Status 404 returned error can't find the container with id 6c8823d9adfe3c4bd1d3a2a7ae20fce4184110b9817138c62e58f4bc9e7d5ca3 Jan 27 14:01:17 crc kubenswrapper[4914]: I0127 14:01:17.427504 4914 generic.go:334] "Generic (PLEG): container finished" podID="f6a6f014-bf9d-4477-828a-7c2ac9080fb2" containerID="9f62f6f91775dfc1865f70ebabccd6d8b61472130ed66a65178f9aee2a34777a" exitCode=0 Jan 27 14:01:17 crc kubenswrapper[4914]: I0127 14:01:17.427561 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kd64" event={"ID":"f6a6f014-bf9d-4477-828a-7c2ac9080fb2","Type":"ContainerDied","Data":"9f62f6f91775dfc1865f70ebabccd6d8b61472130ed66a65178f9aee2a34777a"} Jan 27 14:01:17 crc kubenswrapper[4914]: I0127 14:01:17.427588 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kd64" event={"ID":"f6a6f014-bf9d-4477-828a-7c2ac9080fb2","Type":"ContainerStarted","Data":"6c8823d9adfe3c4bd1d3a2a7ae20fce4184110b9817138c62e58f4bc9e7d5ca3"} Jan 27 14:01:17 crc kubenswrapper[4914]: I0127 14:01:17.431067 4914 generic.go:334] "Generic (PLEG): container finished" podID="29f90886-e5f8-4c3a-8bff-1eea749b8e34" containerID="1b6e67597b6c87062eea904157d5eed72503a24476c88218e42ab4afe76f6b8f" exitCode=0 Jan 27 14:01:17 crc kubenswrapper[4914]: 
I0127 14:01:17.431112 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" event={"ID":"29f90886-e5f8-4c3a-8bff-1eea749b8e34","Type":"ContainerDied","Data":"1b6e67597b6c87062eea904157d5eed72503a24476c88218e42ab4afe76f6b8f"} Jan 27 14:01:18 crc kubenswrapper[4914]: I0127 14:01:18.440110 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kd64" event={"ID":"f6a6f014-bf9d-4477-828a-7c2ac9080fb2","Type":"ContainerStarted","Data":"a8ef948372e162f1f2f7aefd7b384190863795854b782bb46bd8b5b931161056"} Jan 27 14:01:18 crc kubenswrapper[4914]: I0127 14:01:18.683559 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" Jan 27 14:01:18 crc kubenswrapper[4914]: I0127 14:01:18.795253 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vffhn\" (UniqueName: \"kubernetes.io/projected/29f90886-e5f8-4c3a-8bff-1eea749b8e34-kube-api-access-vffhn\") pod \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\" (UID: \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\") " Jan 27 14:01:18 crc kubenswrapper[4914]: I0127 14:01:18.795644 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29f90886-e5f8-4c3a-8bff-1eea749b8e34-bundle\") pod \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\" (UID: \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\") " Jan 27 14:01:18 crc kubenswrapper[4914]: I0127 14:01:18.795690 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29f90886-e5f8-4c3a-8bff-1eea749b8e34-util\") pod \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\" (UID: \"29f90886-e5f8-4c3a-8bff-1eea749b8e34\") " Jan 27 14:01:18 crc kubenswrapper[4914]: I0127 14:01:18.796812 4914 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29f90886-e5f8-4c3a-8bff-1eea749b8e34-bundle" (OuterVolumeSpecName: "bundle") pod "29f90886-e5f8-4c3a-8bff-1eea749b8e34" (UID: "29f90886-e5f8-4c3a-8bff-1eea749b8e34"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:01:18 crc kubenswrapper[4914]: I0127 14:01:18.800788 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f90886-e5f8-4c3a-8bff-1eea749b8e34-kube-api-access-vffhn" (OuterVolumeSpecName: "kube-api-access-vffhn") pod "29f90886-e5f8-4c3a-8bff-1eea749b8e34" (UID: "29f90886-e5f8-4c3a-8bff-1eea749b8e34"). InnerVolumeSpecName "kube-api-access-vffhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:01:18 crc kubenswrapper[4914]: I0127 14:01:18.806008 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29f90886-e5f8-4c3a-8bff-1eea749b8e34-util" (OuterVolumeSpecName: "util") pod "29f90886-e5f8-4c3a-8bff-1eea749b8e34" (UID: "29f90886-e5f8-4c3a-8bff-1eea749b8e34"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:01:18 crc kubenswrapper[4914]: I0127 14:01:18.897583 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vffhn\" (UniqueName: \"kubernetes.io/projected/29f90886-e5f8-4c3a-8bff-1eea749b8e34-kube-api-access-vffhn\") on node \"crc\" DevicePath \"\"" Jan 27 14:01:18 crc kubenswrapper[4914]: I0127 14:01:18.897622 4914 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29f90886-e5f8-4c3a-8bff-1eea749b8e34-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:01:18 crc kubenswrapper[4914]: I0127 14:01:18.897630 4914 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29f90886-e5f8-4c3a-8bff-1eea749b8e34-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:01:19 crc kubenswrapper[4914]: I0127 14:01:19.448699 4914 generic.go:334] "Generic (PLEG): container finished" podID="f6a6f014-bf9d-4477-828a-7c2ac9080fb2" containerID="a8ef948372e162f1f2f7aefd7b384190863795854b782bb46bd8b5b931161056" exitCode=0 Jan 27 14:01:19 crc kubenswrapper[4914]: I0127 14:01:19.448814 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kd64" event={"ID":"f6a6f014-bf9d-4477-828a-7c2ac9080fb2","Type":"ContainerDied","Data":"a8ef948372e162f1f2f7aefd7b384190863795854b782bb46bd8b5b931161056"} Jan 27 14:01:19 crc kubenswrapper[4914]: I0127 14:01:19.452225 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" event={"ID":"29f90886-e5f8-4c3a-8bff-1eea749b8e34","Type":"ContainerDied","Data":"9edb2dcb1949d2d85b4d74c167b4dda007f8f2cdb23f291e77a90c9ab29e0ac3"} Jan 27 14:01:19 crc kubenswrapper[4914]: I0127 14:01:19.452260 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9edb2dcb1949d2d85b4d74c167b4dda007f8f2cdb23f291e77a90c9ab29e0ac3" Jan 27 
14:01:19 crc kubenswrapper[4914]: I0127 14:01:19.452316 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng" Jan 27 14:01:20 crc kubenswrapper[4914]: I0127 14:01:20.460915 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kd64" event={"ID":"f6a6f014-bf9d-4477-828a-7c2ac9080fb2","Type":"ContainerStarted","Data":"e4ccf9854efef6dc942c845d64af44034b7368bc31c0bccd48d89f54ba1890d6"} Jan 27 14:01:20 crc kubenswrapper[4914]: I0127 14:01:20.484255 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5kd64" podStartSLOduration=1.968605277 podStartE2EDuration="4.484235702s" podCreationTimestamp="2026-01-27 14:01:16 +0000 UTC" firstStartedPulling="2026-01-27 14:01:17.429408487 +0000 UTC m=+1035.741758572" lastFinishedPulling="2026-01-27 14:01:19.945038912 +0000 UTC m=+1038.257388997" observedRunningTime="2026-01-27 14:01:20.480861101 +0000 UTC m=+1038.793211226" watchObservedRunningTime="2026-01-27 14:01:20.484235702 +0000 UTC m=+1038.796585787" Jan 27 14:01:23 crc kubenswrapper[4914]: I0127 14:01:23.085817 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfcf7b875-vf68w"] Jan 27 14:01:23 crc kubenswrapper[4914]: E0127 14:01:23.086153 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f90886-e5f8-4c3a-8bff-1eea749b8e34" containerName="util" Jan 27 14:01:23 crc kubenswrapper[4914]: I0127 14:01:23.086170 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f90886-e5f8-4c3a-8bff-1eea749b8e34" containerName="util" Jan 27 14:01:23 crc kubenswrapper[4914]: E0127 14:01:23.086183 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f90886-e5f8-4c3a-8bff-1eea749b8e34" containerName="pull" Jan 27 14:01:23 crc kubenswrapper[4914]: I0127 
14:01:23.086190 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f90886-e5f8-4c3a-8bff-1eea749b8e34" containerName="pull" Jan 27 14:01:23 crc kubenswrapper[4914]: E0127 14:01:23.086217 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f90886-e5f8-4c3a-8bff-1eea749b8e34" containerName="extract" Jan 27 14:01:23 crc kubenswrapper[4914]: I0127 14:01:23.086224 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f90886-e5f8-4c3a-8bff-1eea749b8e34" containerName="extract" Jan 27 14:01:23 crc kubenswrapper[4914]: I0127 14:01:23.086378 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f90886-e5f8-4c3a-8bff-1eea749b8e34" containerName="extract" Jan 27 14:01:23 crc kubenswrapper[4914]: I0127 14:01:23.086912 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-vf68w" Jan 27 14:01:23 crc kubenswrapper[4914]: I0127 14:01:23.090424 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-pfbdx" Jan 27 14:01:23 crc kubenswrapper[4914]: I0127 14:01:23.127003 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfcf7b875-vf68w"] Jan 27 14:01:23 crc kubenswrapper[4914]: I0127 14:01:23.257668 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfcxt\" (UniqueName: \"kubernetes.io/projected/81257126-8f49-4586-9772-3f22b3e82782-kube-api-access-sfcxt\") pod \"openstack-operator-controller-init-6bfcf7b875-vf68w\" (UID: \"81257126-8f49-4586-9772-3f22b3e82782\") " pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-vf68w" Jan 27 14:01:23 crc kubenswrapper[4914]: I0127 14:01:23.359396 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfcxt\" (UniqueName: 
\"kubernetes.io/projected/81257126-8f49-4586-9772-3f22b3e82782-kube-api-access-sfcxt\") pod \"openstack-operator-controller-init-6bfcf7b875-vf68w\" (UID: \"81257126-8f49-4586-9772-3f22b3e82782\") " pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-vf68w" Jan 27 14:01:23 crc kubenswrapper[4914]: I0127 14:01:23.378702 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfcxt\" (UniqueName: \"kubernetes.io/projected/81257126-8f49-4586-9772-3f22b3e82782-kube-api-access-sfcxt\") pod \"openstack-operator-controller-init-6bfcf7b875-vf68w\" (UID: \"81257126-8f49-4586-9772-3f22b3e82782\") " pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-vf68w" Jan 27 14:01:23 crc kubenswrapper[4914]: I0127 14:01:23.405433 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-vf68w" Jan 27 14:01:23 crc kubenswrapper[4914]: I0127 14:01:23.598940 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfcf7b875-vf68w"] Jan 27 14:01:24 crc kubenswrapper[4914]: I0127 14:01:24.485027 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-vf68w" event={"ID":"81257126-8f49-4586-9772-3f22b3e82782","Type":"ContainerStarted","Data":"c6d0f68def3375e1079fd8b21ac322480633fffe61d7f15ca58141f5d6def330"} Jan 27 14:01:26 crc kubenswrapper[4914]: I0127 14:01:26.469395 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:26 crc kubenswrapper[4914]: I0127 14:01:26.469879 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:26 crc kubenswrapper[4914]: I0127 14:01:26.509485 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:26 crc kubenswrapper[4914]: I0127 14:01:26.550203 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:27 crc kubenswrapper[4914]: I0127 14:01:27.742111 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kd64"] Jan 27 14:01:28 crc kubenswrapper[4914]: I0127 14:01:28.511157 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5kd64" podUID="f6a6f014-bf9d-4477-828a-7c2ac9080fb2" containerName="registry-server" containerID="cri-o://e4ccf9854efef6dc942c845d64af44034b7368bc31c0bccd48d89f54ba1890d6" gracePeriod=2 Jan 27 14:01:28 crc kubenswrapper[4914]: I0127 14:01:28.890968 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.032631 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-catalog-content\") pod \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\" (UID: \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\") " Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.032707 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c8wz\" (UniqueName: \"kubernetes.io/projected/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-kube-api-access-6c8wz\") pod \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\" (UID: \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\") " Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.032751 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-utilities\") pod 
\"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\" (UID: \"f6a6f014-bf9d-4477-828a-7c2ac9080fb2\") " Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.033784 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-utilities" (OuterVolumeSpecName: "utilities") pod "f6a6f014-bf9d-4477-828a-7c2ac9080fb2" (UID: "f6a6f014-bf9d-4477-828a-7c2ac9080fb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.039082 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-kube-api-access-6c8wz" (OuterVolumeSpecName: "kube-api-access-6c8wz") pod "f6a6f014-bf9d-4477-828a-7c2ac9080fb2" (UID: "f6a6f014-bf9d-4477-828a-7c2ac9080fb2"). InnerVolumeSpecName "kube-api-access-6c8wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.081198 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6a6f014-bf9d-4477-828a-7c2ac9080fb2" (UID: "f6a6f014-bf9d-4477-828a-7c2ac9080fb2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.134768 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.134806 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c8wz\" (UniqueName: \"kubernetes.io/projected/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-kube-api-access-6c8wz\") on node \"crc\" DevicePath \"\"" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.134818 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a6f014-bf9d-4477-828a-7c2ac9080fb2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.518262 4914 generic.go:334] "Generic (PLEG): container finished" podID="f6a6f014-bf9d-4477-828a-7c2ac9080fb2" containerID="e4ccf9854efef6dc942c845d64af44034b7368bc31c0bccd48d89f54ba1890d6" exitCode=0 Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.518331 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kd64" event={"ID":"f6a6f014-bf9d-4477-828a-7c2ac9080fb2","Type":"ContainerDied","Data":"e4ccf9854efef6dc942c845d64af44034b7368bc31c0bccd48d89f54ba1890d6"} Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.518360 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kd64" event={"ID":"f6a6f014-bf9d-4477-828a-7c2ac9080fb2","Type":"ContainerDied","Data":"6c8823d9adfe3c4bd1d3a2a7ae20fce4184110b9817138c62e58f4bc9e7d5ca3"} Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.518367 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kd64" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.518375 4914 scope.go:117] "RemoveContainer" containerID="e4ccf9854efef6dc942c845d64af44034b7368bc31c0bccd48d89f54ba1890d6" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.519989 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-vf68w" event={"ID":"81257126-8f49-4586-9772-3f22b3e82782","Type":"ContainerStarted","Data":"4532f75548d52f4f74e6994f36620f1dc0fd493dd8b130cd4a8bf5d925e64b42"} Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.520496 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-vf68w" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.537555 4914 scope.go:117] "RemoveContainer" containerID="a8ef948372e162f1f2f7aefd7b384190863795854b782bb46bd8b5b931161056" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.566524 4914 scope.go:117] "RemoveContainer" containerID="9f62f6f91775dfc1865f70ebabccd6d8b61472130ed66a65178f9aee2a34777a" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.572600 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-vf68w" podStartSLOduration=1.550019425 podStartE2EDuration="6.572579363s" podCreationTimestamp="2026-01-27 14:01:23 +0000 UTC" firstStartedPulling="2026-01-27 14:01:23.619504754 +0000 UTC m=+1041.931854839" lastFinishedPulling="2026-01-27 14:01:28.642064692 +0000 UTC m=+1046.954414777" observedRunningTime="2026-01-27 14:01:29.553000551 +0000 UTC m=+1047.865350646" watchObservedRunningTime="2026-01-27 14:01:29.572579363 +0000 UTC m=+1047.884929458" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.579385 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-5kd64"] Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.585162 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5kd64"] Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.588119 4914 scope.go:117] "RemoveContainer" containerID="e4ccf9854efef6dc942c845d64af44034b7368bc31c0bccd48d89f54ba1890d6" Jan 27 14:01:29 crc kubenswrapper[4914]: E0127 14:01:29.588552 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4ccf9854efef6dc942c845d64af44034b7368bc31c0bccd48d89f54ba1890d6\": container with ID starting with e4ccf9854efef6dc942c845d64af44034b7368bc31c0bccd48d89f54ba1890d6 not found: ID does not exist" containerID="e4ccf9854efef6dc942c845d64af44034b7368bc31c0bccd48d89f54ba1890d6" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.588658 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4ccf9854efef6dc942c845d64af44034b7368bc31c0bccd48d89f54ba1890d6"} err="failed to get container status \"e4ccf9854efef6dc942c845d64af44034b7368bc31c0bccd48d89f54ba1890d6\": rpc error: code = NotFound desc = could not find container \"e4ccf9854efef6dc942c845d64af44034b7368bc31c0bccd48d89f54ba1890d6\": container with ID starting with e4ccf9854efef6dc942c845d64af44034b7368bc31c0bccd48d89f54ba1890d6 not found: ID does not exist" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.588742 4914 scope.go:117] "RemoveContainer" containerID="a8ef948372e162f1f2f7aefd7b384190863795854b782bb46bd8b5b931161056" Jan 27 14:01:29 crc kubenswrapper[4914]: E0127 14:01:29.588986 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8ef948372e162f1f2f7aefd7b384190863795854b782bb46bd8b5b931161056\": container with ID starting with 
a8ef948372e162f1f2f7aefd7b384190863795854b782bb46bd8b5b931161056 not found: ID does not exist" containerID="a8ef948372e162f1f2f7aefd7b384190863795854b782bb46bd8b5b931161056" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.589063 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8ef948372e162f1f2f7aefd7b384190863795854b782bb46bd8b5b931161056"} err="failed to get container status \"a8ef948372e162f1f2f7aefd7b384190863795854b782bb46bd8b5b931161056\": rpc error: code = NotFound desc = could not find container \"a8ef948372e162f1f2f7aefd7b384190863795854b782bb46bd8b5b931161056\": container with ID starting with a8ef948372e162f1f2f7aefd7b384190863795854b782bb46bd8b5b931161056 not found: ID does not exist" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.589136 4914 scope.go:117] "RemoveContainer" containerID="9f62f6f91775dfc1865f70ebabccd6d8b61472130ed66a65178f9aee2a34777a" Jan 27 14:01:29 crc kubenswrapper[4914]: E0127 14:01:29.589350 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f62f6f91775dfc1865f70ebabccd6d8b61472130ed66a65178f9aee2a34777a\": container with ID starting with 9f62f6f91775dfc1865f70ebabccd6d8b61472130ed66a65178f9aee2a34777a not found: ID does not exist" containerID="9f62f6f91775dfc1865f70ebabccd6d8b61472130ed66a65178f9aee2a34777a" Jan 27 14:01:29 crc kubenswrapper[4914]: I0127 14:01:29.589426 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f62f6f91775dfc1865f70ebabccd6d8b61472130ed66a65178f9aee2a34777a"} err="failed to get container status \"9f62f6f91775dfc1865f70ebabccd6d8b61472130ed66a65178f9aee2a34777a\": rpc error: code = NotFound desc = could not find container \"9f62f6f91775dfc1865f70ebabccd6d8b61472130ed66a65178f9aee2a34777a\": container with ID starting with 9f62f6f91775dfc1865f70ebabccd6d8b61472130ed66a65178f9aee2a34777a not found: ID does not 
exist" Jan 27 14:01:30 crc kubenswrapper[4914]: I0127 14:01:30.302490 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6a6f014-bf9d-4477-828a-7c2ac9080fb2" path="/var/lib/kubelet/pods/f6a6f014-bf9d-4477-828a-7c2ac9080fb2/volumes" Jan 27 14:01:33 crc kubenswrapper[4914]: I0127 14:01:33.407810 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-vf68w" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.333817 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75b8f798ff-wq8mf"] Jan 27 14:01:52 crc kubenswrapper[4914]: E0127 14:01:52.334712 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a6f014-bf9d-4477-828a-7c2ac9080fb2" containerName="extract-utilities" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.334732 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a6f014-bf9d-4477-828a-7c2ac9080fb2" containerName="extract-utilities" Jan 27 14:01:52 crc kubenswrapper[4914]: E0127 14:01:52.334754 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a6f014-bf9d-4477-828a-7c2ac9080fb2" containerName="registry-server" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.334763 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a6f014-bf9d-4477-828a-7c2ac9080fb2" containerName="registry-server" Jan 27 14:01:52 crc kubenswrapper[4914]: E0127 14:01:52.334782 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a6f014-bf9d-4477-828a-7c2ac9080fb2" containerName="extract-content" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.334790 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a6f014-bf9d-4477-828a-7c2ac9080fb2" containerName="extract-content" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.334957 4914 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f6a6f014-bf9d-4477-828a-7c2ac9080fb2" containerName="registry-server" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.335485 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-wq8mf" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.338754 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-4dncj" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.344974 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fdc687f5-z4dmz"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.345875 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-z4dmz" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.350772 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75b8f798ff-wq8mf"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.354415 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-vvqgb" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.355824 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-76d4d5b8f9-2zr2g"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.356764 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-2zr2g" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.364023 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-76d4d5b8f9-2zr2g"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.375589 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fdc687f5-z4dmz"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.382279 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-pgsht" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.415391 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-84d5bb46b-624jw"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.423715 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-624jw" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.427122 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-dnzgt"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.427247 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-zkf2q" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.428101 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-dnzgt" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.432315 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-7cbvv" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.437322 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84d5bb46b-624jw"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.450719 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-dnzgt"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.457870 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-tdhdr"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.458852 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-tdhdr" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.462676 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-vdkv6" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.469918 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-tdhdr"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.474765 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t6j9\" (UniqueName: \"kubernetes.io/projected/0093b4bf-5086-4bae-adbb-1e18935cc19a-kube-api-access-9t6j9\") pod \"barbican-operator-controller-manager-75b8f798ff-wq8mf\" (UID: \"0093b4bf-5086-4bae-adbb-1e18935cc19a\") " pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-wq8mf" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.475115 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42d65\" (UniqueName: \"kubernetes.io/projected/038a0f9d-802c-4615-bd9f-82f843988bcb-kube-api-access-42d65\") pod \"cinder-operator-controller-manager-5fdc687f5-z4dmz\" (UID: \"038a0f9d-802c-4615-bd9f-82f843988bcb\") " pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-z4dmz" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.475301 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npz4t\" (UniqueName: \"kubernetes.io/projected/8bd09249-97f3-4b92-a829-c6f70919052a-kube-api-access-npz4t\") pod \"designate-operator-controller-manager-76d4d5b8f9-2zr2g\" (UID: \"8bd09249-97f3-4b92-a829-c6f70919052a\") " pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-2zr2g" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 
14:01:52.483866 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.485186 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.488131 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-6xwq5" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.488556 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.511874 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-58865f87b4-ktfq5"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.523556 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-ktfq5" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.541099 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-gxd95" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.564264 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.576886 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7vrs\" (UniqueName: \"kubernetes.io/projected/d32a4b7b-f918-44bb-86a2-95d862a35727-kube-api-access-t7vrs\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-tdhdr\" (UID: \"d32a4b7b-f918-44bb-86a2-95d862a35727\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-tdhdr" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.576955 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtjv2\" (UniqueName: \"kubernetes.io/projected/63b541a1-cc9f-41ea-8da8-c219f9fff59b-kube-api-access-mtjv2\") pod \"glance-operator-controller-manager-84d5bb46b-624jw\" (UID: \"63b541a1-cc9f-41ea-8da8-c219f9fff59b\") " pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-624jw" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.576987 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npz4t\" (UniqueName: \"kubernetes.io/projected/8bd09249-97f3-4b92-a829-c6f70919052a-kube-api-access-npz4t\") pod \"designate-operator-controller-manager-76d4d5b8f9-2zr2g\" (UID: \"8bd09249-97f3-4b92-a829-c6f70919052a\") " pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-2zr2g" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.577020 4914 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t6j9\" (UniqueName: \"kubernetes.io/projected/0093b4bf-5086-4bae-adbb-1e18935cc19a-kube-api-access-9t6j9\") pod \"barbican-operator-controller-manager-75b8f798ff-wq8mf\" (UID: \"0093b4bf-5086-4bae-adbb-1e18935cc19a\") " pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-wq8mf" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.577045 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wzfv\" (UniqueName: \"kubernetes.io/projected/6f5f5515-e498-41a2-8433-5ccec9325ff0-kube-api-access-9wzfv\") pod \"heat-operator-controller-manager-658dd65b86-dnzgt\" (UID: \"6f5f5515-e498-41a2-8433-5ccec9325ff0\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-dnzgt" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.577088 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd9mq\" (UniqueName: \"kubernetes.io/projected/81865888-d857-481d-bcd5-b5e9e17d4b7d-kube-api-access-rd9mq\") pod \"infra-operator-controller-manager-54ccf4f85d-fv4pk\" (UID: \"81865888-d857-481d-bcd5-b5e9e17d4b7d\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.577117 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42d65\" (UniqueName: \"kubernetes.io/projected/038a0f9d-802c-4615-bd9f-82f843988bcb-kube-api-access-42d65\") pod \"cinder-operator-controller-manager-5fdc687f5-z4dmz\" (UID: \"038a0f9d-802c-4615-bd9f-82f843988bcb\") " pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-z4dmz" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.577144 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-fv4pk\" (UID: \"81865888-d857-481d-bcd5-b5e9e17d4b7d\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.579560 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-58865f87b4-ktfq5"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.585928 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.587282 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.590635 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mjz97" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.595269 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78b8f8fd84-64vmw"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.596138 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-64vmw" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.603296 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-p4cls" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.611360 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t6j9\" (UniqueName: \"kubernetes.io/projected/0093b4bf-5086-4bae-adbb-1e18935cc19a-kube-api-access-9t6j9\") pod \"barbican-operator-controller-manager-75b8f798ff-wq8mf\" (UID: \"0093b4bf-5086-4bae-adbb-1e18935cc19a\") " pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-wq8mf" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.612056 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42d65\" (UniqueName: \"kubernetes.io/projected/038a0f9d-802c-4615-bd9f-82f843988bcb-kube-api-access-42d65\") pod \"cinder-operator-controller-manager-5fdc687f5-z4dmz\" (UID: \"038a0f9d-802c-4615-bd9f-82f843988bcb\") " pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-z4dmz" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.614758 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78b8f8fd84-64vmw"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.620063 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npz4t\" (UniqueName: \"kubernetes.io/projected/8bd09249-97f3-4b92-a829-c6f70919052a-kube-api-access-npz4t\") pod \"designate-operator-controller-manager-76d4d5b8f9-2zr2g\" (UID: \"8bd09249-97f3-4b92-a829-c6f70919052a\") " pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-2zr2g" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.621702 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.625616 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-569695f6c5-g26rc"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.626632 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-g26rc" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.634240 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xqkth" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.637639 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-569695f6c5-g26rc"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.642366 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.643372 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.645315 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-wcrds" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.650666 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.651863 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.654117 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wg27z" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.658172 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.662820 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-wq8mf" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.667579 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.676876 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bf4858b78-kjdk7"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.677647 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-kjdk7" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.678700 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd9mq\" (UniqueName: \"kubernetes.io/projected/81865888-d857-481d-bcd5-b5e9e17d4b7d-kube-api-access-rd9mq\") pod \"infra-operator-controller-manager-54ccf4f85d-fv4pk\" (UID: \"81865888-d857-481d-bcd5-b5e9e17d4b7d\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.678745 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-fv4pk\" (UID: \"81865888-d857-481d-bcd5-b5e9e17d4b7d\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.678823 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cshc\" (UniqueName: \"kubernetes.io/projected/ed141eef-7122-41b3-9798-e74d82785c1d-kube-api-access-9cshc\") pod \"keystone-operator-controller-manager-78f8b7b89c-56q9z\" (UID: \"ed141eef-7122-41b3-9798-e74d82785c1d\") " pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.678883 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lszc5\" (UniqueName: \"kubernetes.io/projected/ba1681ef-9c83-419c-bb76-f52cc3e28273-kube-api-access-lszc5\") pod \"manila-operator-controller-manager-78b8f8fd84-64vmw\" (UID: \"ba1681ef-9c83-419c-bb76-f52cc3e28273\") " pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-64vmw" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 
14:01:52.678904 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7vrs\" (UniqueName: \"kubernetes.io/projected/d32a4b7b-f918-44bb-86a2-95d862a35727-kube-api-access-t7vrs\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-tdhdr\" (UID: \"d32a4b7b-f918-44bb-86a2-95d862a35727\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-tdhdr" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.678933 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtjv2\" (UniqueName: \"kubernetes.io/projected/63b541a1-cc9f-41ea-8da8-c219f9fff59b-kube-api-access-mtjv2\") pod \"glance-operator-controller-manager-84d5bb46b-624jw\" (UID: \"63b541a1-cc9f-41ea-8da8-c219f9fff59b\") " pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-624jw" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.678953 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp9m9\" (UniqueName: \"kubernetes.io/projected/96867157-752b-449f-b3ee-c0b428e0dbb1-kube-api-access-pp9m9\") pod \"ironic-operator-controller-manager-58865f87b4-ktfq5\" (UID: \"96867157-752b-449f-b3ee-c0b428e0dbb1\") " pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-ktfq5" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.678976 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wzfv\" (UniqueName: \"kubernetes.io/projected/6f5f5515-e498-41a2-8433-5ccec9325ff0-kube-api-access-9wzfv\") pod \"heat-operator-controller-manager-658dd65b86-dnzgt\" (UID: \"6f5f5515-e498-41a2-8433-5ccec9325ff0\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-dnzgt" Jan 27 14:01:52 crc kubenswrapper[4914]: E0127 14:01:52.679425 4914 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 27 14:01:52 crc kubenswrapper[4914]: E0127 14:01:52.679469 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert podName:81865888-d857-481d-bcd5-b5e9e17d4b7d nodeName:}" failed. No retries permitted until 2026-01-27 14:01:53.179454051 +0000 UTC m=+1071.491804136 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert") pod "infra-operator-controller-manager-54ccf4f85d-fv4pk" (UID: "81865888-d857-481d-bcd5-b5e9e17d4b7d") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.680999 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-t7gxl" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.682403 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-z4dmz" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.704686 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-2zr2g" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.705765 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bf4858b78-kjdk7"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.708643 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd9mq\" (UniqueName: \"kubernetes.io/projected/81865888-d857-481d-bcd5-b5e9e17d4b7d-kube-api-access-rd9mq\") pod \"infra-operator-controller-manager-54ccf4f85d-fv4pk\" (UID: \"81865888-d857-481d-bcd5-b5e9e17d4b7d\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.721294 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtjv2\" (UniqueName: \"kubernetes.io/projected/63b541a1-cc9f-41ea-8da8-c219f9fff59b-kube-api-access-mtjv2\") pod \"glance-operator-controller-manager-84d5bb46b-624jw\" (UID: \"63b541a1-cc9f-41ea-8da8-c219f9fff59b\") " pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-624jw" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.728924 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-qv5qs"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.729813 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qv5qs" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.731012 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7vrs\" (UniqueName: \"kubernetes.io/projected/d32a4b7b-f918-44bb-86a2-95d862a35727-kube-api-access-t7vrs\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-tdhdr\" (UID: \"d32a4b7b-f918-44bb-86a2-95d862a35727\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-tdhdr" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.738743 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tzfcj" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.749006 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-624jw" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.749662 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wzfv\" (UniqueName: \"kubernetes.io/projected/6f5f5515-e498-41a2-8433-5ccec9325ff0-kube-api-access-9wzfv\") pod \"heat-operator-controller-manager-658dd65b86-dnzgt\" (UID: \"6f5f5515-e498-41a2-8433-5ccec9325ff0\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-dnzgt" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.757816 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.759154 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-dnzgt" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.759617 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.766635 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.766792 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-n2z2p" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.774309 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.775715 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.777621 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-mvfxn" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.781501 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-qv5qs"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.781749 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-tdhdr" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.782097 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cshc\" (UniqueName: \"kubernetes.io/projected/ed141eef-7122-41b3-9798-e74d82785c1d-kube-api-access-9cshc\") pod \"keystone-operator-controller-manager-78f8b7b89c-56q9z\" (UID: \"ed141eef-7122-41b3-9798-e74d82785c1d\") " pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.782137 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lszc5\" (UniqueName: \"kubernetes.io/projected/ba1681ef-9c83-419c-bb76-f52cc3e28273-kube-api-access-lszc5\") pod \"manila-operator-controller-manager-78b8f8fd84-64vmw\" (UID: \"ba1681ef-9c83-419c-bb76-f52cc3e28273\") " pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-64vmw" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.782168 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgnfp\" (UniqueName: \"kubernetes.io/projected/718cf203-74e7-4ab4-9e10-2161163946b6-kube-api-access-rgnfp\") pod \"neutron-operator-controller-manager-569695f6c5-g26rc\" (UID: \"718cf203-74e7-4ab4-9e10-2161163946b6\") " pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-g26rc" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.782204 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96fgw\" (UniqueName: \"kubernetes.io/projected/e843177c-8972-4f13-8b45-0d9d229ee1a0-kube-api-access-96fgw\") pod \"octavia-operator-controller-manager-7bf4858b78-kjdk7\" (UID: \"e843177c-8972-4f13-8b45-0d9d229ee1a0\") " pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-kjdk7" Jan 27 
14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.782229 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr5p9\" (UniqueName: \"kubernetes.io/projected/f5f15078-52a7-47ed-96dd-c831a33562cc-kube-api-access-kr5p9\") pod \"mariadb-operator-controller-manager-7b88bfc995-5d4x9\" (UID: \"f5f15078-52a7-47ed-96dd-c831a33562cc\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.782252 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9htp\" (UniqueName: \"kubernetes.io/projected/514e105e-95e2-424b-b003-eb5967594784-kube-api-access-d9htp\") pod \"nova-operator-controller-manager-74ffd97575-94b6l\" (UID: \"514e105e-95e2-424b-b003-eb5967594784\") " pod="openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.782280 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp9m9\" (UniqueName: \"kubernetes.io/projected/96867157-752b-449f-b3ee-c0b428e0dbb1-kube-api-access-pp9m9\") pod \"ironic-operator-controller-manager-58865f87b4-ktfq5\" (UID: \"96867157-752b-449f-b3ee-c0b428e0dbb1\") " pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-ktfq5" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.803076 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.804529 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp9m9\" (UniqueName: \"kubernetes.io/projected/96867157-752b-449f-b3ee-c0b428e0dbb1-kube-api-access-pp9m9\") pod \"ironic-operator-controller-manager-58865f87b4-ktfq5\" (UID: \"96867157-752b-449f-b3ee-c0b428e0dbb1\") " 
pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-ktfq5" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.808395 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lszc5\" (UniqueName: \"kubernetes.io/projected/ba1681ef-9c83-419c-bb76-f52cc3e28273-kube-api-access-lszc5\") pod \"manila-operator-controller-manager-78b8f8fd84-64vmw\" (UID: \"ba1681ef-9c83-419c-bb76-f52cc3e28273\") " pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-64vmw" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.809223 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.820763 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-rgsnw" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.821287 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cshc\" (UniqueName: \"kubernetes.io/projected/ed141eef-7122-41b3-9798-e74d82785c1d-kube-api-access-9cshc\") pod \"keystone-operator-controller-manager-78f8b7b89c-56q9z\" (UID: \"ed141eef-7122-41b3-9798-e74d82785c1d\") " pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.824283 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.829993 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.839121 4914 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-sk6rz"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.840207 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-sk6rz" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.842310 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-bhbxd" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.847562 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-ktfq5" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.852585 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-sk6rz"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.867489 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.883647 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8grc8\" (UniqueName: \"kubernetes.io/projected/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-kube-api-access-8grc8\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878\" (UID: \"2fef03f4-218b-4d6b-b9ca-c303c7c7b002\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.883738 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878\" (UID: 
\"2fef03f4-218b-4d6b-b9ca-c303c7c7b002\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.883768 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd2vm\" (UniqueName: \"kubernetes.io/projected/fc2d6379-e123-4273-8f4a-d36c01030a01-kube-api-access-kd2vm\") pod \"placement-operator-controller-manager-7748d79f84-s5pfn\" (UID: \"fc2d6379-e123-4273-8f4a-d36c01030a01\") " pod="openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.883811 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45hvw\" (UniqueName: \"kubernetes.io/projected/27a3b302-137f-4c5e-a867-8ad8de53db37-kube-api-access-45hvw\") pod \"ovn-operator-controller-manager-bf6d4f946-qv5qs\" (UID: \"27a3b302-137f-4c5e-a867-8ad8de53db37\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qv5qs" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.883899 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgnfp\" (UniqueName: \"kubernetes.io/projected/718cf203-74e7-4ab4-9e10-2161163946b6-kube-api-access-rgnfp\") pod \"neutron-operator-controller-manager-569695f6c5-g26rc\" (UID: \"718cf203-74e7-4ab4-9e10-2161163946b6\") " pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-g26rc" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.883943 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96fgw\" (UniqueName: \"kubernetes.io/projected/e843177c-8972-4f13-8b45-0d9d229ee1a0-kube-api-access-96fgw\") pod \"octavia-operator-controller-manager-7bf4858b78-kjdk7\" (UID: \"e843177c-8972-4f13-8b45-0d9d229ee1a0\") " 
pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-kjdk7" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.883968 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr5p9\" (UniqueName: \"kubernetes.io/projected/f5f15078-52a7-47ed-96dd-c831a33562cc-kube-api-access-kr5p9\") pod \"mariadb-operator-controller-manager-7b88bfc995-5d4x9\" (UID: \"f5f15078-52a7-47ed-96dd-c831a33562cc\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.883998 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9htp\" (UniqueName: \"kubernetes.io/projected/514e105e-95e2-424b-b003-eb5967594784-kube-api-access-d9htp\") pod \"nova-operator-controller-manager-74ffd97575-94b6l\" (UID: \"514e105e-95e2-424b-b003-eb5967594784\") " pod="openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.911315 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgnfp\" (UniqueName: \"kubernetes.io/projected/718cf203-74e7-4ab4-9e10-2161163946b6-kube-api-access-rgnfp\") pod \"neutron-operator-controller-manager-569695f6c5-g26rc\" (UID: \"718cf203-74e7-4ab4-9e10-2161163946b6\") " pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-g26rc" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.913483 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9htp\" (UniqueName: \"kubernetes.io/projected/514e105e-95e2-424b-b003-eb5967594784-kube-api-access-d9htp\") pod \"nova-operator-controller-manager-74ffd97575-94b6l\" (UID: \"514e105e-95e2-424b-b003-eb5967594784\") " pod="openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.913878 4914 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.915769 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96fgw\" (UniqueName: \"kubernetes.io/projected/e843177c-8972-4f13-8b45-0d9d229ee1a0-kube-api-access-96fgw\") pod \"octavia-operator-controller-manager-7bf4858b78-kjdk7\" (UID: \"e843177c-8972-4f13-8b45-0d9d229ee1a0\") " pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-kjdk7" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.947939 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr5p9\" (UniqueName: \"kubernetes.io/projected/f5f15078-52a7-47ed-96dd-c831a33562cc-kube-api-access-kr5p9\") pod \"mariadb-operator-controller-manager-7b88bfc995-5d4x9\" (UID: \"f5f15078-52a7-47ed-96dd-c831a33562cc\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.950953 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-kflq5"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.969871 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-kflq5"] Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.982946 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-kflq5" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.983175 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-64vmw" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.986096 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ntjv\" (UniqueName: \"kubernetes.io/projected/f478cd9a-acc7-4da7-9c4f-e089f3bdd465-kube-api-access-6ntjv\") pod \"telemetry-operator-controller-manager-7db57dc8bf-sk6rz\" (UID: \"f478cd9a-acc7-4da7-9c4f-e089f3bdd465\") " pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-sk6rz" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.986199 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp2qv\" (UniqueName: \"kubernetes.io/projected/d2699df4-2885-4ccc-ae67-5fddc1d1a385-kube-api-access-mp2qv\") pod \"swift-operator-controller-manager-65596dbf77-9rwcq\" (UID: \"d2699df4-2885-4ccc-ae67-5fddc1d1a385\") " pod="openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.986252 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8grc8\" (UniqueName: \"kubernetes.io/projected/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-kube-api-access-8grc8\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878\" (UID: \"2fef03f4-218b-4d6b-b9ca-c303c7c7b002\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.986326 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878\" (UID: \"2fef03f4-218b-4d6b-b9ca-c303c7c7b002\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 
14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.986344 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd2vm\" (UniqueName: \"kubernetes.io/projected/fc2d6379-e123-4273-8f4a-d36c01030a01-kube-api-access-kd2vm\") pod \"placement-operator-controller-manager-7748d79f84-s5pfn\" (UID: \"fc2d6379-e123-4273-8f4a-d36c01030a01\") " pod="openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn" Jan 27 14:01:52 crc kubenswrapper[4914]: I0127 14:01:52.986378 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45hvw\" (UniqueName: \"kubernetes.io/projected/27a3b302-137f-4c5e-a867-8ad8de53db37-kube-api-access-45hvw\") pod \"ovn-operator-controller-manager-bf6d4f946-qv5qs\" (UID: \"27a3b302-137f-4c5e-a867-8ad8de53db37\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qv5qs" Jan 27 14:01:52 crc kubenswrapper[4914]: E0127 14:01:52.987169 4914 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:01:52 crc kubenswrapper[4914]: E0127 14:01:52.987249 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert podName:2fef03f4-218b-4d6b-b9ca-c303c7c7b002 nodeName:}" failed. No retries permitted until 2026-01-27 14:01:53.487226523 +0000 UTC m=+1071.799576608 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" (UID: "2fef03f4-218b-4d6b-b9ca-c303c7c7b002") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.019743 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-hj82n" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.020390 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8grc8\" (UniqueName: \"kubernetes.io/projected/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-kube-api-access-8grc8\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878\" (UID: \"2fef03f4-218b-4d6b-b9ca-c303c7c7b002\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.020666 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45hvw\" (UniqueName: \"kubernetes.io/projected/27a3b302-137f-4c5e-a867-8ad8de53db37-kube-api-access-45hvw\") pod \"ovn-operator-controller-manager-bf6d4f946-qv5qs\" (UID: \"27a3b302-137f-4c5e-a867-8ad8de53db37\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qv5qs" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.021233 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd2vm\" (UniqueName: \"kubernetes.io/projected/fc2d6379-e123-4273-8f4a-d36c01030a01-kube-api-access-kd2vm\") pod \"placement-operator-controller-manager-7748d79f84-s5pfn\" (UID: \"fc2d6379-e123-4273-8f4a-d36c01030a01\") " pod="openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.049040 4914 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.051222 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.056064 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-2gdh4" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.072778 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.098073 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ntjv\" (UniqueName: \"kubernetes.io/projected/f478cd9a-acc7-4da7-9c4f-e089f3bdd465-kube-api-access-6ntjv\") pod \"telemetry-operator-controller-manager-7db57dc8bf-sk6rz\" (UID: \"f478cd9a-acc7-4da7-9c4f-e089f3bdd465\") " pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-sk6rz" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.098500 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp2qv\" (UniqueName: \"kubernetes.io/projected/d2699df4-2885-4ccc-ae67-5fddc1d1a385-kube-api-access-mp2qv\") pod \"swift-operator-controller-manager-65596dbf77-9rwcq\" (UID: \"d2699df4-2885-4ccc-ae67-5fddc1d1a385\") " pod="openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.098542 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twm24\" (UniqueName: \"kubernetes.io/projected/01d1f0ed-818c-4a9d-9635-8aa7dea1cfa2-kube-api-access-twm24\") pod \"test-operator-controller-manager-6c866cfdcb-kflq5\" (UID: 
\"01d1f0ed-818c-4a9d-9635-8aa7dea1cfa2\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-kflq5" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.101879 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-g26rc" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.112995 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.114148 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.117229 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lcts5" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.117368 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.117470 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.120359 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.128347 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ntjv\" (UniqueName: \"kubernetes.io/projected/f478cd9a-acc7-4da7-9c4f-e089f3bdd465-kube-api-access-6ntjv\") pod \"telemetry-operator-controller-manager-7db57dc8bf-sk6rz\" (UID: \"f478cd9a-acc7-4da7-9c4f-e089f3bdd465\") " pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-sk6rz" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.139664 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.140712 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp2qv\" (UniqueName: \"kubernetes.io/projected/d2699df4-2885-4ccc-ae67-5fddc1d1a385-kube-api-access-mp2qv\") pod \"swift-operator-controller-manager-65596dbf77-9rwcq\" (UID: \"d2699df4-2885-4ccc-ae67-5fddc1d1a385\") " pod="openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.161534 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-kjdk7" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.161679 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qv5qs" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.183495 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.194582 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.201281 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-fv4pk\" (UID: \"81865888-d857-481d-bcd5-b5e9e17d4b7d\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.201366 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jczvp\" (UniqueName: \"kubernetes.io/projected/1c94fefd-3fb7-4730-9386-1499a83c60c6-kube-api-access-jczvp\") pod \"watcher-operator-controller-manager-6476466c7c-rsnw7\" (UID: \"1c94fefd-3fb7-4730-9386-1499a83c60c6\") " pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.201395 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.201480 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs\") pod 
\"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.201524 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twm24\" (UniqueName: \"kubernetes.io/projected/01d1f0ed-818c-4a9d-9635-8aa7dea1cfa2-kube-api-access-twm24\") pod \"test-operator-controller-manager-6c866cfdcb-kflq5\" (UID: \"01d1f0ed-818c-4a9d-9635-8aa7dea1cfa2\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-kflq5" Jan 27 14:01:53 crc kubenswrapper[4914]: E0127 14:01:53.201534 4914 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.201550 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47bgz\" (UniqueName: \"kubernetes.io/projected/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-kube-api-access-47bgz\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:53 crc kubenswrapper[4914]: E0127 14:01:53.201607 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert podName:81865888-d857-481d-bcd5-b5e9e17d4b7d nodeName:}" failed. No retries permitted until 2026-01-27 14:01:54.201585119 +0000 UTC m=+1072.513935274 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert") pod "infra-operator-controller-manager-54ccf4f85d-fv4pk" (UID: "81865888-d857-481d-bcd5-b5e9e17d4b7d") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.208291 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.239097 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.240109 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.240502 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twm24\" (UniqueName: \"kubernetes.io/projected/01d1f0ed-818c-4a9d-9635-8aa7dea1cfa2-kube-api-access-twm24\") pod \"test-operator-controller-manager-6c866cfdcb-kflq5\" (UID: \"01d1f0ed-818c-4a9d-9635-8aa7dea1cfa2\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-kflq5" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.242779 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-nkg4s" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.246191 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.271305 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-sk6rz" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.303873 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jczvp\" (UniqueName: \"kubernetes.io/projected/1c94fefd-3fb7-4730-9386-1499a83c60c6-kube-api-access-jczvp\") pod \"watcher-operator-controller-manager-6476466c7c-rsnw7\" (UID: \"1c94fefd-3fb7-4730-9386-1499a83c60c6\") " pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.303937 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.304019 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.304086 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47bgz\" (UniqueName: \"kubernetes.io/projected/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-kube-api-access-47bgz\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:53 crc kubenswrapper[4914]: E0127 14:01:53.304224 4914 
secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:01:53 crc kubenswrapper[4914]: E0127 14:01:53.304291 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs podName:dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf nodeName:}" failed. No retries permitted until 2026-01-27 14:01:53.804271116 +0000 UTC m=+1072.116621201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-nc9zh" (UID: "dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf") : secret "metrics-server-cert" not found Jan 27 14:01:53 crc kubenswrapper[4914]: E0127 14:01:53.304493 4914 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:01:53 crc kubenswrapper[4914]: E0127 14:01:53.304561 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs podName:dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf nodeName:}" failed. No retries permitted until 2026-01-27 14:01:53.804544644 +0000 UTC m=+1072.116894809 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-nc9zh" (UID: "dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf") : secret "webhook-server-cert" not found Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.327293 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jczvp\" (UniqueName: \"kubernetes.io/projected/1c94fefd-3fb7-4730-9386-1499a83c60c6-kube-api-access-jczvp\") pod \"watcher-operator-controller-manager-6476466c7c-rsnw7\" (UID: \"1c94fefd-3fb7-4730-9386-1499a83c60c6\") " pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.331473 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47bgz\" (UniqueName: \"kubernetes.io/projected/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-kube-api-access-47bgz\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.351028 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-76d4d5b8f9-2zr2g"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.377175 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-kflq5" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.388587 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.406738 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t55qv\" (UniqueName: \"kubernetes.io/projected/25c3827b-f4ed-432a-b7cb-928c4b315176-kube-api-access-t55qv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vljnz\" (UID: \"25c3827b-f4ed-432a-b7cb-928c4b315176\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.483546 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-58865f87b4-ktfq5"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.493369 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84d5bb46b-624jw"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.498764 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75b8f798ff-wq8mf"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.509230 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t55qv\" (UniqueName: \"kubernetes.io/projected/25c3827b-f4ed-432a-b7cb-928c4b315176-kube-api-access-t55qv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vljnz\" (UID: \"25c3827b-f4ed-432a-b7cb-928c4b315176\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.509318 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878\" (UID: \"2fef03f4-218b-4d6b-b9ca-c303c7c7b002\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:01:53 crc kubenswrapper[4914]: E0127 14:01:53.509433 4914 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:01:53 crc kubenswrapper[4914]: E0127 14:01:53.509488 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert podName:2fef03f4-218b-4d6b-b9ca-c303c7c7b002 nodeName:}" failed. No retries permitted until 2026-01-27 14:01:54.509472515 +0000 UTC m=+1072.821822600 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" (UID: "2fef03f4-218b-4d6b-b9ca-c303c7c7b002") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:01:53 crc kubenswrapper[4914]: W0127 14:01:53.581231 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96867157_752b_449f_b3ee_c0b428e0dbb1.slice/crio-5d25e3da3451ded98a21a4e31e8ee7280ef2199c80f6afc8fc7d2799c9b0fb78 WatchSource:0}: Error finding container 5d25e3da3451ded98a21a4e31e8ee7280ef2199c80f6afc8fc7d2799c9b0fb78: Status 404 returned error can't find the container with id 5d25e3da3451ded98a21a4e31e8ee7280ef2199c80f6afc8fc7d2799c9b0fb78 Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.594924 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fdc687f5-z4dmz"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.602239 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t55qv\" (UniqueName: 
\"kubernetes.io/projected/25c3827b-f4ed-432a-b7cb-928c4b315176-kube-api-access-t55qv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vljnz\" (UID: \"25c3827b-f4ed-432a-b7cb-928c4b315176\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.625990 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-tdhdr"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.708500 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-wq8mf" event={"ID":"0093b4bf-5086-4bae-adbb-1e18935cc19a","Type":"ContainerStarted","Data":"228bf38ec4271fc655e3bbf36d593eca3a36039d033b755d3207066df4b8f087"} Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.712763 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-2zr2g" event={"ID":"8bd09249-97f3-4b92-a829-c6f70919052a","Type":"ContainerStarted","Data":"e6a8ebb477eceb986c24a3e3aa3ae5f2abf574c4af64e63e6f6fd6661e36d519"} Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.717023 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-z4dmz" event={"ID":"038a0f9d-802c-4615-bd9f-82f843988bcb","Type":"ContainerStarted","Data":"786cb30dc995da9f8ef179ee0fd6c57493488cf5d58cabe0cd11954c9c022265"} Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.720490 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-624jw" event={"ID":"63b541a1-cc9f-41ea-8da8-c219f9fff59b","Type":"ContainerStarted","Data":"9cafbc0a319bd6b740e6fa41625f8e00b4a9ee51d033639fdcc3616dab25580c"} Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.721783 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-tdhdr" event={"ID":"d32a4b7b-f918-44bb-86a2-95d862a35727","Type":"ContainerStarted","Data":"255f38a416c50c19d3c00bdf37248bc1a27e93c38f9841f69758823341a5c24b"} Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.722725 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-ktfq5" event={"ID":"96867157-752b-449f-b3ee-c0b428e0dbb1","Type":"ContainerStarted","Data":"5d25e3da3451ded98a21a4e31e8ee7280ef2199c80f6afc8fc7d2799c9b0fb78"} Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.829164 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.829275 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:53 crc kubenswrapper[4914]: E0127 14:01:53.829397 4914 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:01:53 crc kubenswrapper[4914]: E0127 14:01:53.829409 4914 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:01:53 crc kubenswrapper[4914]: E0127 14:01:53.829501 4914 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs podName:dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf nodeName:}" failed. No retries permitted until 2026-01-27 14:01:54.829462899 +0000 UTC m=+1073.141812974 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-nc9zh" (UID: "dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf") : secret "webhook-server-cert" not found Jan 27 14:01:53 crc kubenswrapper[4914]: E0127 14:01:53.829528 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs podName:dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf nodeName:}" failed. No retries permitted until 2026-01-27 14:01:54.82951157 +0000 UTC m=+1073.141861655 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-nc9zh" (UID: "dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf") : secret "metrics-server-cert" not found Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.876146 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz" Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.903540 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78b8f8fd84-64vmw"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.920536 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-dnzgt"] Jan 27 14:01:53 crc kubenswrapper[4914]: I0127 14:01:53.947523 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z"] Jan 27 14:01:53 crc kubenswrapper[4914]: W0127 14:01:53.970765 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded141eef_7122_41b3_9798_e74d82785c1d.slice/crio-5fa3fd70e8cc13e5ddf9f5bf819dab51409819be58be687207abbac1a3fdd945 WatchSource:0}: Error finding container 5fa3fd70e8cc13e5ddf9f5bf819dab51409819be58be687207abbac1a3fdd945: Status 404 returned error can't find the container with id 5fa3fd70e8cc13e5ddf9f5bf819dab51409819be58be687207abbac1a3fdd945 Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.065082 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-569695f6c5-g26rc"] Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.074285 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l"] Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.079608 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bf4858b78-kjdk7"] Jan 27 14:01:54 crc kubenswrapper[4914]: W0127 14:01:54.080228 4914 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718cf203_74e7_4ab4_9e10_2161163946b6.slice/crio-cc38db2a5fa872bc76d9fd4c62194429e26b8dd2dd68f072f2c7fb5c0cb18142 WatchSource:0}: Error finding container cc38db2a5fa872bc76d9fd4c62194429e26b8dd2dd68f072f2c7fb5c0cb18142: Status 404 returned error can't find the container with id cc38db2a5fa872bc76d9fd4c62194429e26b8dd2dd68f072f2c7fb5c0cb18142 Jan 27 14:01:54 crc kubenswrapper[4914]: W0127 14:01:54.080637 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode843177c_8972_4f13_8b45_0d9d229ee1a0.slice/crio-a82b4540e4ce7dbac707c056aeed3262ab6ef682f53b37622900f23434ea2676 WatchSource:0}: Error finding container a82b4540e4ce7dbac707c056aeed3262ab6ef682f53b37622900f23434ea2676: Status 404 returned error can't find the container with id a82b4540e4ce7dbac707c056aeed3262ab6ef682f53b37622900f23434ea2676 Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.084543 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-qv5qs"] Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.236419 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-fv4pk\" (UID: \"81865888-d857-481d-bcd5-b5e9e17d4b7d\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.236608 4914 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.236669 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert 
podName:81865888-d857-481d-bcd5-b5e9e17d4b7d nodeName:}" failed. No retries permitted until 2026-01-27 14:01:56.236651899 +0000 UTC m=+1074.549001984 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert") pod "infra-operator-controller-manager-54ccf4f85d-fv4pk" (UID: "81865888-d857-481d-bcd5-b5e9e17d4b7d") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.266817 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9"] Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.276753 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7"] Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.289770 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq"] Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.312409 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kr5p9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-7b88bfc995-5d4x9_openstack-operators(f5f15078-52a7-47ed-96dd-c831a33562cc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.313611 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9" podUID="f5f15078-52a7-47ed-96dd-c831a33562cc" Jan 27 14:01:54 crc 
kubenswrapper[4914]: W0127 14:01:54.316811 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc2d6379_e123_4273_8f4a_d36c01030a01.slice/crio-a6b58cebbc5194622193ce843d49c9890f71b02a3db1f7b01c9a52e70ada2f28 WatchSource:0}: Error finding container a6b58cebbc5194622193ce843d49c9890f71b02a3db1f7b01c9a52e70ada2f28: Status 404 returned error can't find the container with id a6b58cebbc5194622193ce843d49c9890f71b02a3db1f7b01c9a52e70ada2f28 Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.318367 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-sk6rz"] Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.319033 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/swift-operator@sha256:018ae1352a061ad22a0d4ac5764eb7e19cf5a1d6c2e554f61ae0bd82ebe62e29,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mp2qv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-65596dbf77-9rwcq_openstack-operators(d2699df4-2885-4ccc-ae67-5fddc1d1a385): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.320220 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq" podUID="d2699df4-2885-4ccc-ae67-5fddc1d1a385" Jan 27 14:01:54 crc kubenswrapper[4914]: W0127 14:01:54.320468 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c94fefd_3fb7_4730_9386_1499a83c60c6.slice/crio-8aaf8e3306ccdec8b934fd94d5267c93aafb75c693c4fd345521b4dd16f80d10 WatchSource:0}: Error finding container 
8aaf8e3306ccdec8b934fd94d5267c93aafb75c693c4fd345521b4dd16f80d10: Status 404 returned error can't find the container with id 8aaf8e3306ccdec8b934fd94d5267c93aafb75c693c4fd345521b4dd16f80d10 Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.323508 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/placement-operator@sha256:a40693d0a2ee7b50ff5b2bd339bc0ce358ccc16309e803e40d8b26e189a2b4c0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kd2vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-7748d79f84-s5pfn_openstack-operators(fc2d6379-e123-4273-8f4a-d36c01030a01): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.324710 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn" podUID="fc2d6379-e123-4273-8f4a-d36c01030a01" Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.326245 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn"] Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.327682 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/watcher-operator@sha256:611e4fb8bf6cd263664ccb437637105fba633ba8f701c228fd525a7a7b3c8d74,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jczvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6476466c7c-rsnw7_openstack-operators(1c94fefd-3fb7-4730-9386-1499a83c60c6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.328846 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7" podUID="1c94fefd-3fb7-4730-9386-1499a83c60c6" Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.461260 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-kflq5"] Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.482075 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz"] Jan 27 14:01:54 crc kubenswrapper[4914]: W0127 14:01:54.497445 4914 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01d1f0ed_818c_4a9d_9635_8aa7dea1cfa2.slice/crio-f478fce874d1a29255a0facf018b7ea53038ad6ed74f9aa857090af0614562b0 WatchSource:0}: Error finding container f478fce874d1a29255a0facf018b7ea53038ad6ed74f9aa857090af0614562b0: Status 404 returned error can't find the container with id f478fce874d1a29255a0facf018b7ea53038ad6ed74f9aa857090af0614562b0 Jan 27 14:01:54 crc kubenswrapper[4914]: W0127 14:01:54.503474 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25c3827b_f4ed_432a_b7cb_928c4b315176.slice/crio-a83fdd63b5ec9f91f261d997e4a575d035fa34f0c298d9ba3df5699379a70830 WatchSource:0}: Error finding container a83fdd63b5ec9f91f261d997e4a575d035fa34f0c298d9ba3df5699379a70830: Status 404 returned error can't find the container with id a83fdd63b5ec9f91f261d997e4a575d035fa34f0c298d9ba3df5699379a70830 Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.506721 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t55qv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vljnz_openstack-operators(25c3827b-f4ed-432a-b7cb-928c4b315176): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.508283 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz" podUID="25c3827b-f4ed-432a-b7cb-928c4b315176" Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.540349 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878\" (UID: \"2fef03f4-218b-4d6b-b9ca-c303c7c7b002\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.540589 4914 
secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.540679 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert podName:2fef03f4-218b-4d6b-b9ca-c303c7c7b002 nodeName:}" failed. No retries permitted until 2026-01-27 14:01:56.540655628 +0000 UTC m=+1074.853005773 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" (UID: "2fef03f4-218b-4d6b-b9ca-c303c7c7b002") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.741262 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-dnzgt" event={"ID":"6f5f5515-e498-41a2-8433-5ccec9325ff0","Type":"ContainerStarted","Data":"965be7c4c059741c5d13be538f32e3859c6da9aa0fb495297d52c9e972734741"} Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.753494 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qv5qs" event={"ID":"27a3b302-137f-4c5e-a867-8ad8de53db37","Type":"ContainerStarted","Data":"d6787ff80800ad23c9c97881b7338c704b5a44feb3137a841a959484b8d49392"} Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.765898 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z" event={"ID":"ed141eef-7122-41b3-9798-e74d82785c1d","Type":"ContainerStarted","Data":"5fa3fd70e8cc13e5ddf9f5bf819dab51409819be58be687207abbac1a3fdd945"} Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.767443 4914 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn" event={"ID":"fc2d6379-e123-4273-8f4a-d36c01030a01","Type":"ContainerStarted","Data":"a6b58cebbc5194622193ce843d49c9890f71b02a3db1f7b01c9a52e70ada2f28"} Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.769353 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/placement-operator@sha256:a40693d0a2ee7b50ff5b2bd339bc0ce358ccc16309e803e40d8b26e189a2b4c0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn" podUID="fc2d6379-e123-4273-8f4a-d36c01030a01" Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.783476 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7" event={"ID":"1c94fefd-3fb7-4730-9386-1499a83c60c6","Type":"ContainerStarted","Data":"8aaf8e3306ccdec8b934fd94d5267c93aafb75c693c4fd345521b4dd16f80d10"} Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.786435 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/watcher-operator@sha256:611e4fb8bf6cd263664ccb437637105fba633ba8f701c228fd525a7a7b3c8d74\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7" podUID="1c94fefd-3fb7-4730-9386-1499a83c60c6" Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.789379 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-sk6rz" event={"ID":"f478cd9a-acc7-4da7-9c4f-e089f3bdd465","Type":"ContainerStarted","Data":"bda6b0acdf93849997d0aa9415b446bea2e8879d41227994fe1d9a2c7e32f379"} Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.792635 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq" event={"ID":"d2699df4-2885-4ccc-ae67-5fddc1d1a385","Type":"ContainerStarted","Data":"7700b7dcdd65cacda36fbf488fae5b14b789c494ffdadb8f5f6e70e9c2a8f2c1"} Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.795915 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-g26rc" event={"ID":"718cf203-74e7-4ab4-9e10-2161163946b6","Type":"ContainerStarted","Data":"cc38db2a5fa872bc76d9fd4c62194429e26b8dd2dd68f072f2c7fb5c0cb18142"} Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.799633 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/swift-operator@sha256:018ae1352a061ad22a0d4ac5764eb7e19cf5a1d6c2e554f61ae0bd82ebe62e29\\\"\"" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq" podUID="d2699df4-2885-4ccc-ae67-5fddc1d1a385" Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.800108 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-kjdk7" event={"ID":"e843177c-8972-4f13-8b45-0d9d229ee1a0","Type":"ContainerStarted","Data":"a82b4540e4ce7dbac707c056aeed3262ab6ef682f53b37622900f23434ea2676"} Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.801438 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l" event={"ID":"514e105e-95e2-424b-b003-eb5967594784","Type":"ContainerStarted","Data":"3bd92cedf59cf307544857d6abae936c36fd4333a7d566109cb067e8d4c1b6bf"} Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.802777 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9" 
event={"ID":"f5f15078-52a7-47ed-96dd-c831a33562cc","Type":"ContainerStarted","Data":"b3fd07e29efe79ddd481ad74da12ba71a6c44e510312993a25412bd174324996"} Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.805271 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-64vmw" event={"ID":"ba1681ef-9c83-419c-bb76-f52cc3e28273","Type":"ContainerStarted","Data":"ebe1f7f6ce5392e9f20384e66a1ae0def5c15a5e162865b717a5bc06ea6a902c"} Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.836347 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-kflq5" event={"ID":"01d1f0ed-818c-4a9d-9635-8aa7dea1cfa2","Type":"ContainerStarted","Data":"f478fce874d1a29255a0facf018b7ea53038ad6ed74f9aa857090af0614562b0"} Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.836506 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9" podUID="f5f15078-52a7-47ed-96dd-c831a33562cc" Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.838221 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz" event={"ID":"25c3827b-f4ed-432a-b7cb-928c4b315176","Type":"ContainerStarted","Data":"a83fdd63b5ec9f91f261d997e4a575d035fa34f0c298d9ba3df5699379a70830"} Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.847382 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: 
\"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:54 crc kubenswrapper[4914]: I0127 14:01:54.847501 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.849042 4914 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.849090 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs podName:dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf nodeName:}" failed. No retries permitted until 2026-01-27 14:01:56.849076658 +0000 UTC m=+1075.161426743 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-nc9zh" (UID: "dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf") : secret "metrics-server-cert" not found Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.849401 4914 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.849433 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs podName:dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf nodeName:}" failed. No retries permitted until 2026-01-27 14:01:56.849420918 +0000 UTC m=+1075.161771003 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-nc9zh" (UID: "dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf") : secret "webhook-server-cert" not found Jan 27 14:01:54 crc kubenswrapper[4914]: E0127 14:01:54.859166 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz" podUID="25c3827b-f4ed-432a-b7cb-928c4b315176" Jan 27 14:01:55 crc kubenswrapper[4914]: E0127 14:01:55.853025 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9" podUID="f5f15078-52a7-47ed-96dd-c831a33562cc" Jan 27 14:01:55 crc kubenswrapper[4914]: E0127 14:01:55.854377 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz" podUID="25c3827b-f4ed-432a-b7cb-928c4b315176" Jan 27 14:01:55 crc kubenswrapper[4914]: E0127 14:01:55.854532 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/rh-ee-vfisarov/placement-operator@sha256:a40693d0a2ee7b50ff5b2bd339bc0ce358ccc16309e803e40d8b26e189a2b4c0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn" podUID="fc2d6379-e123-4273-8f4a-d36c01030a01" Jan 27 14:01:55 crc kubenswrapper[4914]: E0127 14:01:55.854819 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/swift-operator@sha256:018ae1352a061ad22a0d4ac5764eb7e19cf5a1d6c2e554f61ae0bd82ebe62e29\\\"\"" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq" podUID="d2699df4-2885-4ccc-ae67-5fddc1d1a385" Jan 27 14:01:55 crc kubenswrapper[4914]: E0127 14:01:55.858607 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/watcher-operator@sha256:611e4fb8bf6cd263664ccb437637105fba633ba8f701c228fd525a7a7b3c8d74\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7" podUID="1c94fefd-3fb7-4730-9386-1499a83c60c6" Jan 27 14:01:56 crc kubenswrapper[4914]: I0127 14:01:56.275169 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-fv4pk\" (UID: \"81865888-d857-481d-bcd5-b5e9e17d4b7d\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:01:56 crc kubenswrapper[4914]: E0127 14:01:56.275380 4914 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:01:56 crc kubenswrapper[4914]: E0127 14:01:56.275462 4914 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert podName:81865888-d857-481d-bcd5-b5e9e17d4b7d nodeName:}" failed. No retries permitted until 2026-01-27 14:02:00.275444846 +0000 UTC m=+1078.587794931 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert") pod "infra-operator-controller-manager-54ccf4f85d-fv4pk" (UID: "81865888-d857-481d-bcd5-b5e9e17d4b7d") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:01:56 crc kubenswrapper[4914]: I0127 14:01:56.578590 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878\" (UID: \"2fef03f4-218b-4d6b-b9ca-c303c7c7b002\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:01:56 crc kubenswrapper[4914]: E0127 14:01:56.578787 4914 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:01:56 crc kubenswrapper[4914]: E0127 14:01:56.578880 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert podName:2fef03f4-218b-4d6b-b9ca-c303c7c7b002 nodeName:}" failed. No retries permitted until 2026-01-27 14:02:00.578860259 +0000 UTC m=+1078.891210394 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" (UID: "2fef03f4-218b-4d6b-b9ca-c303c7c7b002") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:01:56 crc kubenswrapper[4914]: I0127 14:01:56.883205 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:56 crc kubenswrapper[4914]: I0127 14:01:56.883405 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:01:56 crc kubenswrapper[4914]: E0127 14:01:56.883414 4914 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:01:56 crc kubenswrapper[4914]: E0127 14:01:56.883574 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs podName:dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf nodeName:}" failed. No retries permitted until 2026-01-27 14:02:00.883531357 +0000 UTC m=+1079.195881442 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-nc9zh" (UID: "dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf") : secret "webhook-server-cert" not found Jan 27 14:01:56 crc kubenswrapper[4914]: E0127 14:01:56.884064 4914 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:01:56 crc kubenswrapper[4914]: E0127 14:01:56.884099 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs podName:dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf nodeName:}" failed. No retries permitted until 2026-01-27 14:02:00.884088482 +0000 UTC m=+1079.196438617 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-nc9zh" (UID: "dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf") : secret "metrics-server-cert" not found Jan 27 14:02:00 crc kubenswrapper[4914]: I0127 14:02:00.331746 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-fv4pk\" (UID: \"81865888-d857-481d-bcd5-b5e9e17d4b7d\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:02:00 crc kubenswrapper[4914]: E0127 14:02:00.331977 4914 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:02:00 crc kubenswrapper[4914]: E0127 14:02:00.332471 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert 
podName:81865888-d857-481d-bcd5-b5e9e17d4b7d nodeName:}" failed. No retries permitted until 2026-01-27 14:02:08.33244661 +0000 UTC m=+1086.644796695 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert") pod "infra-operator-controller-manager-54ccf4f85d-fv4pk" (UID: "81865888-d857-481d-bcd5-b5e9e17d4b7d") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:02:00 crc kubenswrapper[4914]: I0127 14:02:00.637860 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878\" (UID: \"2fef03f4-218b-4d6b-b9ca-c303c7c7b002\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:02:00 crc kubenswrapper[4914]: E0127 14:02:00.638055 4914 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:02:00 crc kubenswrapper[4914]: E0127 14:02:00.638109 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert podName:2fef03f4-218b-4d6b-b9ca-c303c7c7b002 nodeName:}" failed. No retries permitted until 2026-01-27 14:02:08.638091354 +0000 UTC m=+1086.950441439 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" (UID: "2fef03f4-218b-4d6b-b9ca-c303c7c7b002") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:02:00 crc kubenswrapper[4914]: I0127 14:02:00.941472 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:02:00 crc kubenswrapper[4914]: I0127 14:02:00.941599 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:02:00 crc kubenswrapper[4914]: E0127 14:02:00.941749 4914 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:02:00 crc kubenswrapper[4914]: E0127 14:02:00.941809 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs podName:dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf nodeName:}" failed. No retries permitted until 2026-01-27 14:02:08.941791695 +0000 UTC m=+1087.254141780 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-nc9zh" (UID: "dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf") : secret "webhook-server-cert" not found Jan 27 14:02:00 crc kubenswrapper[4914]: E0127 14:02:00.942192 4914 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:02:00 crc kubenswrapper[4914]: E0127 14:02:00.942226 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs podName:dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf nodeName:}" failed. No retries permitted until 2026-01-27 14:02:08.942218948 +0000 UTC m=+1087.254569023 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-nc9zh" (UID: "dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf") : secret "metrics-server-cert" not found Jan 27 14:02:07 crc kubenswrapper[4914]: E0127 14:02:07.103127 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/rh-ee-vfisarov/nova-operator@sha256:9c0272b9043057e7fd740843e11c951ce93d5169298ed91aa8a60a702649f7cf" Jan 27 14:02:07 crc kubenswrapper[4914]: E0127 14:02:07.103894 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/nova-operator@sha256:9c0272b9043057e7fd740843e11c951ce93d5169298ed91aa8a60a702649f7cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d9htp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-74ffd97575-94b6l_openstack-operators(514e105e-95e2-424b-b003-eb5967594784): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:02:07 crc kubenswrapper[4914]: E0127 14:02:07.105144 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l" podUID="514e105e-95e2-424b-b003-eb5967594784" Jan 27 14:02:07 crc kubenswrapper[4914]: E0127 14:02:07.653147 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/rh-ee-vfisarov/keystone-operator@sha256:3f07fd90b18820601ae78f45a9fbef53bf9e3ed131d5cfa1d424ae0145862dd6" Jan 27 14:02:07 crc kubenswrapper[4914]: E0127 14:02:07.653352 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/rh-ee-vfisarov/keystone-operator@sha256:3f07fd90b18820601ae78f45a9fbef53bf9e3ed131d5cfa1d424ae0145862dd6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cshc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-78f8b7b89c-56q9z_openstack-operators(ed141eef-7122-41b3-9798-e74d82785c1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:02:07 crc kubenswrapper[4914]: E0127 14:02:07.654537 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z" podUID="ed141eef-7122-41b3-9798-e74d82785c1d" Jan 27 14:02:07 crc kubenswrapper[4914]: I0127 14:02:07.691389 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:02:07 crc kubenswrapper[4914]: I0127 14:02:07.691470 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:02:07 crc kubenswrapper[4914]: E0127 14:02:07.941206 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/nova-operator@sha256:9c0272b9043057e7fd740843e11c951ce93d5169298ed91aa8a60a702649f7cf\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l" podUID="514e105e-95e2-424b-b003-eb5967594784" Jan 27 14:02:07 crc kubenswrapper[4914]: E0127 14:02:07.943493 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/keystone-operator@sha256:3f07fd90b18820601ae78f45a9fbef53bf9e3ed131d5cfa1d424ae0145862dd6\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z" podUID="ed141eef-7122-41b3-9798-e74d82785c1d" Jan 27 14:02:08 crc kubenswrapper[4914]: I0127 14:02:08.356540 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-fv4pk\" (UID: \"81865888-d857-481d-bcd5-b5e9e17d4b7d\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:02:08 crc kubenswrapper[4914]: E0127 14:02:08.356730 4914 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:02:08 crc kubenswrapper[4914]: E0127 14:02:08.356838 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert podName:81865888-d857-481d-bcd5-b5e9e17d4b7d nodeName:}" failed. 
No retries permitted until 2026-01-27 14:02:24.356784146 +0000 UTC m=+1102.669134231 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert") pod "infra-operator-controller-manager-54ccf4f85d-fv4pk" (UID: "81865888-d857-481d-bcd5-b5e9e17d4b7d") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:02:08 crc kubenswrapper[4914]: I0127 14:02:08.660952 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878\" (UID: \"2fef03f4-218b-4d6b-b9ca-c303c7c7b002\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:02:08 crc kubenswrapper[4914]: E0127 14:02:08.661139 4914 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:02:08 crc kubenswrapper[4914]: E0127 14:02:08.661194 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert podName:2fef03f4-218b-4d6b-b9ca-c303c7c7b002 nodeName:}" failed. No retries permitted until 2026-01-27 14:02:24.661176546 +0000 UTC m=+1102.973526631 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" (UID: "2fef03f4-218b-4d6b-b9ca-c303c7c7b002") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:02:08 crc kubenswrapper[4914]: I0127 14:02:08.958349 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-kflq5" event={"ID":"01d1f0ed-818c-4a9d-9635-8aa7dea1cfa2","Type":"ContainerStarted","Data":"eeb7417d30a908007371744da1018af99faeeecc9fe4ed1601a5e538920a9476"} Jan 27 14:02:08 crc kubenswrapper[4914]: I0127 14:02:08.958491 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-kflq5" Jan 27 14:02:08 crc kubenswrapper[4914]: I0127 14:02:08.964526 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:02:08 crc kubenswrapper[4914]: I0127 14:02:08.964639 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:02:08 crc kubenswrapper[4914]: E0127 14:02:08.964714 4914 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:02:08 crc kubenswrapper[4914]: E0127 
14:02:08.964794 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs podName:dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf nodeName:}" failed. No retries permitted until 2026-01-27 14:02:24.964769814 +0000 UTC m=+1103.277119949 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-nc9zh" (UID: "dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf") : secret "metrics-server-cert" not found Jan 27 14:02:08 crc kubenswrapper[4914]: E0127 14:02:08.964794 4914 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:02:08 crc kubenswrapper[4914]: E0127 14:02:08.964890 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs podName:dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf nodeName:}" failed. No retries permitted until 2026-01-27 14:02:24.964870297 +0000 UTC m=+1103.277220472 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-nc9zh" (UID: "dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf") : secret "webhook-server-cert" not found Jan 27 14:02:08 crc kubenswrapper[4914]: I0127 14:02:08.976775 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qv5qs" event={"ID":"27a3b302-137f-4c5e-a867-8ad8de53db37","Type":"ContainerStarted","Data":"d0190cfca672ab616233625bd4d8994ee34aa24788fd5baadaad1af6b2aea350"} Jan 27 14:02:08 crc kubenswrapper[4914]: I0127 14:02:08.976913 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qv5qs" Jan 27 14:02:08 crc kubenswrapper[4914]: I0127 14:02:08.983312 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-ktfq5" event={"ID":"96867157-752b-449f-b3ee-c0b428e0dbb1","Type":"ContainerStarted","Data":"ffbffec31de6ef7b98d8736993cc7fcfd284bd2f3984d9a640b02944796e1aac"} Jan 27 14:02:08 crc kubenswrapper[4914]: I0127 14:02:08.983457 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-ktfq5" Jan 27 14:02:08 crc kubenswrapper[4914]: I0127 14:02:08.987733 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-64vmw" event={"ID":"ba1681ef-9c83-419c-bb76-f52cc3e28273","Type":"ContainerStarted","Data":"c6910ab08cdd82dbe9b696f1184676d8cca071412d1341993829e413ae512b05"} Jan 27 14:02:08 crc kubenswrapper[4914]: I0127 14:02:08.988423 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-64vmw" Jan 27 14:02:08 crc 
kubenswrapper[4914]: I0127 14:02:08.997596 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-kflq5" podStartSLOduration=4.42373572 podStartE2EDuration="16.997582585s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:54.505523255 +0000 UTC m=+1072.817873340" lastFinishedPulling="2026-01-27 14:02:07.07937012 +0000 UTC m=+1085.391720205" observedRunningTime="2026-01-27 14:02:08.994201484 +0000 UTC m=+1087.306551569" watchObservedRunningTime="2026-01-27 14:02:08.997582585 +0000 UTC m=+1087.309932670" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.006614 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-sk6rz" event={"ID":"f478cd9a-acc7-4da7-9c4f-e089f3bdd465","Type":"ContainerStarted","Data":"9e13014923280719174f4a5dc8a2f50f2ec121cd7687559805695bb14cfeab45"} Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.007314 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-sk6rz" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.011003 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-wq8mf" event={"ID":"0093b4bf-5086-4bae-adbb-1e18935cc19a","Type":"ContainerStarted","Data":"19002c5e1a85826e2b4924a1222d6d1b54a163095d2933c6ac7552675fde56d7"} Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.011462 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-wq8mf" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.016474 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-z4dmz" 
event={"ID":"038a0f9d-802c-4615-bd9f-82f843988bcb","Type":"ContainerStarted","Data":"8eee6cbb97b3762998b5de76ea147e06ddc4d08bc5979aa147174d74831bc827"} Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.016549 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-z4dmz" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.021980 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-g26rc" event={"ID":"718cf203-74e7-4ab4-9e10-2161163946b6","Type":"ContainerStarted","Data":"c04cb19f9437f41a9ffafbfe1010677c9b0dbdfa2966c5979429fe2d6a800b0d"} Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.022630 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-g26rc" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.029536 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-kjdk7" event={"ID":"e843177c-8972-4f13-8b45-0d9d229ee1a0","Type":"ContainerStarted","Data":"3c1b41016b83bbfc17621031e8409b8e7780bdf817cf58b9fd9addcc1a4ce690"} Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.030145 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-kjdk7" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.031806 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-dnzgt" event={"ID":"6f5f5515-e498-41a2-8433-5ccec9325ff0","Type":"ContainerStarted","Data":"a526916d838c761db3b7062964b1e45e8712fe96d635d25d452d10822afc8067"} Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.032562 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-658dd65b86-dnzgt" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.042215 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-tdhdr" event={"ID":"d32a4b7b-f918-44bb-86a2-95d862a35727","Type":"ContainerStarted","Data":"2133aa3b00b5630dfde2992b425367ccff1f56c855763a4df59d99f116f7549f"} Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.043043 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-tdhdr" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.050865 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qv5qs" podStartSLOduration=4.025874483 podStartE2EDuration="17.05081138s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:54.076214985 +0000 UTC m=+1072.388565070" lastFinishedPulling="2026-01-27 14:02:07.101151882 +0000 UTC m=+1085.413501967" observedRunningTime="2026-01-27 14:02:09.016706264 +0000 UTC m=+1087.329056349" watchObservedRunningTime="2026-01-27 14:02:09.05081138 +0000 UTC m=+1087.363161465" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.055069 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-64vmw" podStartSLOduration=3.357295009 podStartE2EDuration="17.055049695s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:53.925982228 +0000 UTC m=+1072.238332313" lastFinishedPulling="2026-01-27 14:02:07.623736914 +0000 UTC m=+1085.936086999" observedRunningTime="2026-01-27 14:02:09.043430619 +0000 UTC m=+1087.355780724" watchObservedRunningTime="2026-01-27 14:02:09.055049695 +0000 UTC m=+1087.367399780" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 
14:02:09.059941 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-624jw" event={"ID":"63b541a1-cc9f-41ea-8da8-c219f9fff59b","Type":"ContainerStarted","Data":"7c25513f338b5d605c35cead6303df65162c889111233b3fbc4a3c8059a89d88"} Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.060758 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-624jw" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.069700 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-2zr2g" event={"ID":"8bd09249-97f3-4b92-a829-c6f70919052a","Type":"ContainerStarted","Data":"1c08d2030d7d37866767dacb5971c140c1a859bc7cb9359ce77eb6efb7d1fccc"} Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.070355 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-2zr2g" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.070596 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-ktfq5" podStartSLOduration=3.045587519 podStartE2EDuration="17.070585416s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:53.588102448 +0000 UTC m=+1071.900452533" lastFinishedPulling="2026-01-27 14:02:07.613100345 +0000 UTC m=+1085.925450430" observedRunningTime="2026-01-27 14:02:09.068878901 +0000 UTC m=+1087.381228986" watchObservedRunningTime="2026-01-27 14:02:09.070585416 +0000 UTC m=+1087.382935501" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.096732 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-z4dmz" podStartSLOduration=3.099580605 
podStartE2EDuration="17.096714565s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:53.626571063 +0000 UTC m=+1071.938921148" lastFinishedPulling="2026-01-27 14:02:07.623705023 +0000 UTC m=+1085.936055108" observedRunningTime="2026-01-27 14:02:09.087222318 +0000 UTC m=+1087.399572403" watchObservedRunningTime="2026-01-27 14:02:09.096714565 +0000 UTC m=+1087.409064640" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.125419 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-g26rc" podStartSLOduration=3.577812904 podStartE2EDuration="17.125401454s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:54.081819037 +0000 UTC m=+1072.394169122" lastFinishedPulling="2026-01-27 14:02:07.629407587 +0000 UTC m=+1085.941757672" observedRunningTime="2026-01-27 14:02:09.123739549 +0000 UTC m=+1087.436089624" watchObservedRunningTime="2026-01-27 14:02:09.125401454 +0000 UTC m=+1087.437751539" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.189456 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-dnzgt" podStartSLOduration=3.445609597 podStartE2EDuration="17.189421752s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:53.940511682 +0000 UTC m=+1072.252861767" lastFinishedPulling="2026-01-27 14:02:07.684323837 +0000 UTC m=+1085.996673922" observedRunningTime="2026-01-27 14:02:09.175400791 +0000 UTC m=+1087.487750876" watchObservedRunningTime="2026-01-27 14:02:09.189421752 +0000 UTC m=+1087.501771837" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.204255 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-sk6rz" podStartSLOduration=3.826964525 
podStartE2EDuration="17.204227403s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:54.306891575 +0000 UTC m=+1072.619241670" lastFinishedPulling="2026-01-27 14:02:07.684154463 +0000 UTC m=+1085.996504548" observedRunningTime="2026-01-27 14:02:09.201444698 +0000 UTC m=+1087.513794793" watchObservedRunningTime="2026-01-27 14:02:09.204227403 +0000 UTC m=+1087.516577498" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.222136 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-624jw" podStartSLOduration=3.151034912 podStartE2EDuration="17.222112818s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:53.604181575 +0000 UTC m=+1071.916531660" lastFinishedPulling="2026-01-27 14:02:07.675259481 +0000 UTC m=+1085.987609566" observedRunningTime="2026-01-27 14:02:09.219442856 +0000 UTC m=+1087.531792961" watchObservedRunningTime="2026-01-27 14:02:09.222112818 +0000 UTC m=+1087.534462903" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.251425 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-tdhdr" podStartSLOduration=3.280314149 podStartE2EDuration="17.251396423s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:53.65818372 +0000 UTC m=+1071.970533805" lastFinishedPulling="2026-01-27 14:02:07.629265994 +0000 UTC m=+1085.941616079" observedRunningTime="2026-01-27 14:02:09.248168026 +0000 UTC m=+1087.560518121" watchObservedRunningTime="2026-01-27 14:02:09.251396423 +0000 UTC m=+1087.563746518" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.280976 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-wq8mf" podStartSLOduration=3.7682728020000003 
podStartE2EDuration="17.280957835s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:53.588436768 +0000 UTC m=+1071.900786853" lastFinishedPulling="2026-01-27 14:02:07.101121791 +0000 UTC m=+1085.413471886" observedRunningTime="2026-01-27 14:02:09.280532014 +0000 UTC m=+1087.592882109" watchObservedRunningTime="2026-01-27 14:02:09.280957835 +0000 UTC m=+1087.593307920" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.306267 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-kjdk7" podStartSLOduration=3.742366918 podStartE2EDuration="17.306239151s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:54.088288362 +0000 UTC m=+1072.400638437" lastFinishedPulling="2026-01-27 14:02:07.652160585 +0000 UTC m=+1085.964510670" observedRunningTime="2026-01-27 14:02:09.30362572 +0000 UTC m=+1087.615975825" watchObservedRunningTime="2026-01-27 14:02:09.306239151 +0000 UTC m=+1087.618589236" Jan 27 14:02:09 crc kubenswrapper[4914]: I0127 14:02:09.325485 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-2zr2g" podStartSLOduration=3.6273500690000002 podStartE2EDuration="17.325459543s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:53.381255846 +0000 UTC m=+1071.693605931" lastFinishedPulling="2026-01-27 14:02:07.07936532 +0000 UTC m=+1085.391715405" observedRunningTime="2026-01-27 14:02:09.325066862 +0000 UTC m=+1087.637416957" watchObservedRunningTime="2026-01-27 14:02:09.325459543 +0000 UTC m=+1087.637809638" Jan 27 14:02:12 crc kubenswrapper[4914]: I0127 14:02:12.686323 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-z4dmz" Jan 27 14:02:12 crc kubenswrapper[4914]: 
I0127 14:02:12.709056 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-2zr2g" Jan 27 14:02:13 crc kubenswrapper[4914]: I0127 14:02:13.105184 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-g26rc" Jan 27 14:02:13 crc kubenswrapper[4914]: I0127 14:02:13.165212 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-kjdk7" Jan 27 14:02:13 crc kubenswrapper[4914]: I0127 14:02:13.170799 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qv5qs" Jan 27 14:02:13 crc kubenswrapper[4914]: I0127 14:02:13.274102 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-sk6rz" Jan 27 14:02:13 crc kubenswrapper[4914]: I0127 14:02:13.380845 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-kflq5" Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.137511 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn" event={"ID":"fc2d6379-e123-4273-8f4a-d36c01030a01","Type":"ContainerStarted","Data":"09cf62b19d9ffe73d9d4896e6a8231df092ca4c13264320bc3bac9104b56709a"} Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.138048 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn" Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.139855 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7" event={"ID":"1c94fefd-3fb7-4730-9386-1499a83c60c6","Type":"ContainerStarted","Data":"6fa5e409e6975adbbe9ce933e65863aaa3f42fa126dfe4a01ce56e9dd96626c7"} Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.140041 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7" Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.141901 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq" event={"ID":"d2699df4-2885-4ccc-ae67-5fddc1d1a385","Type":"ContainerStarted","Data":"32eb499ecd98ff9dc7d1d25ca07b8b61f35fbc38debde850e35f46e6c681869b"} Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.142074 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq" Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.143512 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9" event={"ID":"f5f15078-52a7-47ed-96dd-c831a33562cc","Type":"ContainerStarted","Data":"98dd00b5c1911f69c35137df9478d8544451db821da57abf446692fa722a4106"} Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.143692 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9" Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.145166 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz" event={"ID":"25c3827b-f4ed-432a-b7cb-928c4b315176","Type":"ContainerStarted","Data":"7e234f22b47dfcb742c936806434e682bc29691e1e846b5740af3f21ae2b47e6"} Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.155654 4914 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn" podStartSLOduration=2.751719575 podStartE2EDuration="27.155627893s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:54.323386992 +0000 UTC m=+1072.635737067" lastFinishedPulling="2026-01-27 14:02:18.7272953 +0000 UTC m=+1097.039645385" observedRunningTime="2026-01-27 14:02:19.153617269 +0000 UTC m=+1097.465967344" watchObservedRunningTime="2026-01-27 14:02:19.155627893 +0000 UTC m=+1097.467977988" Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.173058 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vljnz" podStartSLOduration=1.934387292 podStartE2EDuration="26.173043596s" podCreationTimestamp="2026-01-27 14:01:53 +0000 UTC" firstStartedPulling="2026-01-27 14:01:54.506563453 +0000 UTC m=+1072.818913538" lastFinishedPulling="2026-01-27 14:02:18.745219757 +0000 UTC m=+1097.057569842" observedRunningTime="2026-01-27 14:02:19.169464119 +0000 UTC m=+1097.481814214" watchObservedRunningTime="2026-01-27 14:02:19.173043596 +0000 UTC m=+1097.485393681" Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.195252 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq" podStartSLOduration=2.733434169 podStartE2EDuration="27.195233358s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:54.318747976 +0000 UTC m=+1072.631098061" lastFinishedPulling="2026-01-27 14:02:18.780547165 +0000 UTC m=+1097.092897250" observedRunningTime="2026-01-27 14:02:19.189965375 +0000 UTC m=+1097.502315480" watchObservedRunningTime="2026-01-27 14:02:19.195233358 +0000 UTC m=+1097.507583433" Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.205665 4914 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7" podStartSLOduration=2.566360815 podStartE2EDuration="27.205647331s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:54.327517884 +0000 UTC m=+1072.639867969" lastFinishedPulling="2026-01-27 14:02:18.9668044 +0000 UTC m=+1097.279154485" observedRunningTime="2026-01-27 14:02:19.20449893 +0000 UTC m=+1097.516849015" watchObservedRunningTime="2026-01-27 14:02:19.205647331 +0000 UTC m=+1097.517997416" Jan 27 14:02:19 crc kubenswrapper[4914]: I0127 14:02:19.222617 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9" podStartSLOduration=2.788014911 podStartE2EDuration="27.222598781s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:54.312223759 +0000 UTC m=+1072.624573844" lastFinishedPulling="2026-01-27 14:02:18.746807629 +0000 UTC m=+1097.059157714" observedRunningTime="2026-01-27 14:02:19.218292624 +0000 UTC m=+1097.530642709" watchObservedRunningTime="2026-01-27 14:02:19.222598781 +0000 UTC m=+1097.534948886" Jan 27 14:02:22 crc kubenswrapper[4914]: I0127 14:02:22.666928 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-wq8mf" Jan 27 14:02:22 crc kubenswrapper[4914]: I0127 14:02:22.754176 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-624jw" Jan 27 14:02:22 crc kubenswrapper[4914]: I0127 14:02:22.762137 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-dnzgt" Jan 27 14:02:22 crc kubenswrapper[4914]: I0127 14:02:22.784558 4914 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-tdhdr" Jan 27 14:02:22 crc kubenswrapper[4914]: I0127 14:02:22.851327 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-ktfq5" Jan 27 14:02:22 crc kubenswrapper[4914]: I0127 14:02:22.987621 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-64vmw" Jan 27 14:02:23 crc kubenswrapper[4914]: I0127 14:02:23.143226 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-5d4x9" Jan 27 14:02:23 crc kubenswrapper[4914]: I0127 14:02:23.169624 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l" event={"ID":"514e105e-95e2-424b-b003-eb5967594784","Type":"ContainerStarted","Data":"3c8de433e84786a4f99743d1c04eb19a125ae0b48da48eca80307de5eeee24a8"} Jan 27 14:02:23 crc kubenswrapper[4914]: I0127 14:02:23.169879 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l" Jan 27 14:02:23 crc kubenswrapper[4914]: I0127 14:02:23.200705 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l" podStartSLOduration=2.3829585079999998 podStartE2EDuration="31.200687204s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:54.073914392 +0000 UTC m=+1072.386264477" lastFinishedPulling="2026-01-27 14:02:22.891643088 +0000 UTC m=+1101.203993173" observedRunningTime="2026-01-27 14:02:23.197025195 +0000 UTC m=+1101.509375280" watchObservedRunningTime="2026-01-27 14:02:23.200687204 +0000 UTC m=+1101.513037289" Jan 27 14:02:24 crc 
kubenswrapper[4914]: I0127 14:02:24.177060 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z" event={"ID":"ed141eef-7122-41b3-9798-e74d82785c1d","Type":"ContainerStarted","Data":"65d257cf16c54719a7707f8f2d83cbef7a238f09f2746ecc30e205b83adbaea3"} Jan 27 14:02:24 crc kubenswrapper[4914]: I0127 14:02:24.177713 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z" Jan 27 14:02:24 crc kubenswrapper[4914]: I0127 14:02:24.442772 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-fv4pk\" (UID: \"81865888-d857-481d-bcd5-b5e9e17d4b7d\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:02:24 crc kubenswrapper[4914]: I0127 14:02:24.455158 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81865888-d857-481d-bcd5-b5e9e17d4b7d-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-fv4pk\" (UID: \"81865888-d857-481d-bcd5-b5e9e17d4b7d\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:02:24 crc kubenswrapper[4914]: I0127 14:02:24.610702 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-6xwq5" Jan 27 14:02:24 crc kubenswrapper[4914]: I0127 14:02:24.619060 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:02:24 crc kubenswrapper[4914]: I0127 14:02:24.746686 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878\" (UID: \"2fef03f4-218b-4d6b-b9ca-c303c7c7b002\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:02:24 crc kubenswrapper[4914]: I0127 14:02:24.751477 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fef03f4-218b-4d6b-b9ca-c303c7c7b002-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878\" (UID: \"2fef03f4-218b-4d6b-b9ca-c303c7c7b002\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:02:24 crc kubenswrapper[4914]: I0127 14:02:24.973409 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-n2z2p" Jan 27 14:02:24 crc kubenswrapper[4914]: I0127 14:02:24.981840 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:02:25 crc kubenswrapper[4914]: I0127 14:02:25.051290 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:02:25 crc kubenswrapper[4914]: I0127 14:02:25.051663 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:02:25 crc kubenswrapper[4914]: I0127 14:02:25.058494 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:02:25 crc kubenswrapper[4914]: I0127 14:02:25.059068 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-nc9zh\" (UID: \"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:02:25 crc kubenswrapper[4914]: I0127 14:02:25.097752 4914 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z" podStartSLOduration=3.128601594 podStartE2EDuration="33.097727714s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:01:53.97357639 +0000 UTC m=+1072.285926475" lastFinishedPulling="2026-01-27 14:02:23.94270251 +0000 UTC m=+1102.255052595" observedRunningTime="2026-01-27 14:02:24.206618482 +0000 UTC m=+1102.518968567" watchObservedRunningTime="2026-01-27 14:02:25.097727714 +0000 UTC m=+1103.410077809" Jan 27 14:02:25 crc kubenswrapper[4914]: I0127 14:02:25.102551 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk"] Jan 27 14:02:25 crc kubenswrapper[4914]: I0127 14:02:25.191294 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" event={"ID":"81865888-d857-481d-bcd5-b5e9e17d4b7d","Type":"ContainerStarted","Data":"313734b3deca3c8aa40b30bcb4ce1d5ff51edf70bd5dcdfdaf0d1c1746c3daae"} Jan 27 14:02:25 crc kubenswrapper[4914]: W0127 14:02:25.222028 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fef03f4_218b_4d6b_b9ca_c303c7c7b002.slice/crio-82d69099928a8dc5a5116a3a436563895d23bd358677c338b422253e6d2ed114 WatchSource:0}: Error finding container 82d69099928a8dc5a5116a3a436563895d23bd358677c338b422253e6d2ed114: Status 404 returned error can't find the container with id 82d69099928a8dc5a5116a3a436563895d23bd358677c338b422253e6d2ed114 Jan 27 14:02:25 crc kubenswrapper[4914]: I0127 14:02:25.236995 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878"] Jan 27 14:02:25 crc kubenswrapper[4914]: I0127 14:02:25.264735 4914 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lcts5" Jan 27 14:02:25 crc kubenswrapper[4914]: I0127 14:02:25.273183 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:02:25 crc kubenswrapper[4914]: I0127 14:02:25.693783 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh"] Jan 27 14:02:25 crc kubenswrapper[4914]: W0127 14:02:25.711358 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddda335ba_11df_4fb7_86ba_4b2a0c8dbdaf.slice/crio-f2dada856a174191f315ec7fd60ab5223cb012f6aa2afc749f1b23d42b7148ca WatchSource:0}: Error finding container f2dada856a174191f315ec7fd60ab5223cb012f6aa2afc749f1b23d42b7148ca: Status 404 returned error can't find the container with id f2dada856a174191f315ec7fd60ab5223cb012f6aa2afc749f1b23d42b7148ca Jan 27 14:02:26 crc kubenswrapper[4914]: I0127 14:02:26.200898 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" event={"ID":"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf","Type":"ContainerStarted","Data":"5ead6a904c5f38800cf3a2ef4fc71181c45f5ef83a04c6f2742648186ff91507"} Jan 27 14:02:26 crc kubenswrapper[4914]: I0127 14:02:26.201318 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:02:26 crc kubenswrapper[4914]: I0127 14:02:26.201342 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" event={"ID":"dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf","Type":"ContainerStarted","Data":"f2dada856a174191f315ec7fd60ab5223cb012f6aa2afc749f1b23d42b7148ca"} Jan 27 14:02:26 crc kubenswrapper[4914]: 
I0127 14:02:26.202561 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" event={"ID":"2fef03f4-218b-4d6b-b9ca-c303c7c7b002","Type":"ContainerStarted","Data":"82d69099928a8dc5a5116a3a436563895d23bd358677c338b422253e6d2ed114"} Jan 27 14:02:26 crc kubenswrapper[4914]: I0127 14:02:26.236886 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" podStartSLOduration=33.236861829 podStartE2EDuration="33.236861829s" podCreationTimestamp="2026-01-27 14:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:02:26.232763747 +0000 UTC m=+1104.545113832" watchObservedRunningTime="2026-01-27 14:02:26.236861829 +0000 UTC m=+1104.549211914" Jan 27 14:02:28 crc kubenswrapper[4914]: I0127 14:02:28.220113 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" event={"ID":"2fef03f4-218b-4d6b-b9ca-c303c7c7b002","Type":"ContainerStarted","Data":"392aee9235267571698c9ac74c8baa63d733d9c18c317d7d67a8078727f458c8"} Jan 27 14:02:28 crc kubenswrapper[4914]: I0127 14:02:28.220456 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:02:28 crc kubenswrapper[4914]: I0127 14:02:28.221522 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" event={"ID":"81865888-d857-481d-bcd5-b5e9e17d4b7d","Type":"ContainerStarted","Data":"2036f9e8a0c789aa95f0ec1818aac264e38ec02d42e16826fba668506ff7aa93"} Jan 27 14:02:28 crc kubenswrapper[4914]: I0127 14:02:28.221669 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:02:28 crc kubenswrapper[4914]: I0127 14:02:28.253009 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" podStartSLOduration=33.876955276 podStartE2EDuration="36.252987262s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:02:25.224590417 +0000 UTC m=+1103.536940502" lastFinishedPulling="2026-01-27 14:02:27.600622393 +0000 UTC m=+1105.912972488" observedRunningTime="2026-01-27 14:02:28.246166305 +0000 UTC m=+1106.558516390" watchObservedRunningTime="2026-01-27 14:02:28.252987262 +0000 UTC m=+1106.565337347" Jan 27 14:02:28 crc kubenswrapper[4914]: I0127 14:02:28.266353 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" podStartSLOduration=33.795711472 podStartE2EDuration="36.266334966s" podCreationTimestamp="2026-01-27 14:01:52 +0000 UTC" firstStartedPulling="2026-01-27 14:02:25.136711442 +0000 UTC m=+1103.449061527" lastFinishedPulling="2026-01-27 14:02:27.607334936 +0000 UTC m=+1105.919685021" observedRunningTime="2026-01-27 14:02:28.261909026 +0000 UTC m=+1106.574259131" watchObservedRunningTime="2026-01-27 14:02:28.266334966 +0000 UTC m=+1106.578685051" Jan 27 14:02:32 crc kubenswrapper[4914]: I0127 14:02:32.917384 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-56q9z" Jan 27 14:02:33 crc kubenswrapper[4914]: I0127 14:02:33.123230 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-94b6l" Jan 27 14:02:33 crc kubenswrapper[4914]: I0127 14:02:33.186058 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/placement-operator-controller-manager-7748d79f84-s5pfn" Jan 27 14:02:33 crc kubenswrapper[4914]: I0127 14:02:33.211212 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-9rwcq" Jan 27 14:02:33 crc kubenswrapper[4914]: I0127 14:02:33.392806 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-rsnw7" Jan 27 14:02:34 crc kubenswrapper[4914]: I0127 14:02:34.625306 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-fv4pk" Jan 27 14:02:34 crc kubenswrapper[4914]: I0127 14:02:34.989033 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878" Jan 27 14:02:35 crc kubenswrapper[4914]: I0127 14:02:35.279201 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-nc9zh" Jan 27 14:02:37 crc kubenswrapper[4914]: I0127 14:02:37.690926 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:02:37 crc kubenswrapper[4914]: I0127 14:02:37.690986 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.104697 4914 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-5srxf"] Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.115623 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-5srxf"] Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.115746 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-5srxf" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.118198 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.118232 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.118455 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-dnt68" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.118608 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.158026 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j9sl\" (UniqueName: \"kubernetes.io/projected/98ea0694-2ad6-4e8f-ad31-69f3506e0d90-kube-api-access-5j9sl\") pod \"dnsmasq-dns-84bb9d8bd9-5srxf\" (UID: \"98ea0694-2ad6-4e8f-ad31-69f3506e0d90\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-5srxf" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.158105 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ea0694-2ad6-4e8f-ad31-69f3506e0d90-config\") pod \"dnsmasq-dns-84bb9d8bd9-5srxf\" (UID: \"98ea0694-2ad6-4e8f-ad31-69f3506e0d90\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-5srxf" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.172898 4914 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-vxpjb"] Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.174217 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.176335 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.183696 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-vxpjb"] Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.259436 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-dns-svc\") pod \"dnsmasq-dns-5f854695bc-vxpjb\" (UID: \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\") " pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.259482 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-config\") pod \"dnsmasq-dns-5f854695bc-vxpjb\" (UID: \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\") " pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.259531 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j9sl\" (UniqueName: \"kubernetes.io/projected/98ea0694-2ad6-4e8f-ad31-69f3506e0d90-kube-api-access-5j9sl\") pod \"dnsmasq-dns-84bb9d8bd9-5srxf\" (UID: \"98ea0694-2ad6-4e8f-ad31-69f3506e0d90\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-5srxf" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.259565 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz9nf\" (UniqueName: 
\"kubernetes.io/projected/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-kube-api-access-hz9nf\") pod \"dnsmasq-dns-5f854695bc-vxpjb\" (UID: \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\") " pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.259603 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ea0694-2ad6-4e8f-ad31-69f3506e0d90-config\") pod \"dnsmasq-dns-84bb9d8bd9-5srxf\" (UID: \"98ea0694-2ad6-4e8f-ad31-69f3506e0d90\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-5srxf" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.260518 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ea0694-2ad6-4e8f-ad31-69f3506e0d90-config\") pod \"dnsmasq-dns-84bb9d8bd9-5srxf\" (UID: \"98ea0694-2ad6-4e8f-ad31-69f3506e0d90\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-5srxf" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.277888 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j9sl\" (UniqueName: \"kubernetes.io/projected/98ea0694-2ad6-4e8f-ad31-69f3506e0d90-kube-api-access-5j9sl\") pod \"dnsmasq-dns-84bb9d8bd9-5srxf\" (UID: \"98ea0694-2ad6-4e8f-ad31-69f3506e0d90\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-5srxf" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.361373 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-config\") pod \"dnsmasq-dns-5f854695bc-vxpjb\" (UID: \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\") " pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.361546 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz9nf\" (UniqueName: \"kubernetes.io/projected/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-kube-api-access-hz9nf\") 
pod \"dnsmasq-dns-5f854695bc-vxpjb\" (UID: \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\") " pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.361716 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-dns-svc\") pod \"dnsmasq-dns-5f854695bc-vxpjb\" (UID: \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\") " pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.362587 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-config\") pod \"dnsmasq-dns-5f854695bc-vxpjb\" (UID: \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\") " pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.363868 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-dns-svc\") pod \"dnsmasq-dns-5f854695bc-vxpjb\" (UID: \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\") " pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.377579 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz9nf\" (UniqueName: \"kubernetes.io/projected/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-kube-api-access-hz9nf\") pod \"dnsmasq-dns-5f854695bc-vxpjb\" (UID: \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\") " pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.435809 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-5srxf" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.491543 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.882486 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-5srxf"] Jan 27 14:02:53 crc kubenswrapper[4914]: I0127 14:02:53.958505 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-vxpjb"] Jan 27 14:02:53 crc kubenswrapper[4914]: W0127 14:02:53.961191 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cce5d04_a158_45e3_9512_2c97d2d9d4ff.slice/crio-9baed2381725e71b75f537f507bf63c5038a41a179dd11ad2d6b6969ed75d429 WatchSource:0}: Error finding container 9baed2381725e71b75f537f507bf63c5038a41a179dd11ad2d6b6969ed75d429: Status 404 returned error can't find the container with id 9baed2381725e71b75f537f507bf63c5038a41a179dd11ad2d6b6969ed75d429 Jan 27 14:02:54 crc kubenswrapper[4914]: I0127 14:02:54.407352 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-5srxf" event={"ID":"98ea0694-2ad6-4e8f-ad31-69f3506e0d90","Type":"ContainerStarted","Data":"9da1b703298b65697e176b91c8303c65d38b42f0fba48219d9ce6ddce7adeec3"} Jan 27 14:02:54 crc kubenswrapper[4914]: I0127 14:02:54.408401 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" event={"ID":"9cce5d04-a158-45e3-9512-2c97d2d9d4ff","Type":"ContainerStarted","Data":"9baed2381725e71b75f537f507bf63c5038a41a179dd11ad2d6b6969ed75d429"} Jan 27 14:02:55 crc kubenswrapper[4914]: I0127 14:02:55.929842 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-vxpjb"] Jan 27 14:02:55 crc kubenswrapper[4914]: I0127 14:02:55.956333 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-xfjdd"] Jan 27 14:02:55 crc kubenswrapper[4914]: I0127 14:02:55.957404 4914 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" Jan 27 14:02:55 crc kubenswrapper[4914]: I0127 14:02:55.967999 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-xfjdd"] Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.009295 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-xfjdd\" (UID: \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\") " pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.009356 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mwmw\" (UniqueName: \"kubernetes.io/projected/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-kube-api-access-7mwmw\") pod \"dnsmasq-dns-744ffd65bc-xfjdd\" (UID: \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\") " pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.009410 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-config\") pod \"dnsmasq-dns-744ffd65bc-xfjdd\" (UID: \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\") " pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.117148 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-xfjdd\" (UID: \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\") " pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.117230 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mwmw\" 
(UniqueName: \"kubernetes.io/projected/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-kube-api-access-7mwmw\") pod \"dnsmasq-dns-744ffd65bc-xfjdd\" (UID: \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\") " pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.117284 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-config\") pod \"dnsmasq-dns-744ffd65bc-xfjdd\" (UID: \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\") " pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.118305 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-xfjdd\" (UID: \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\") " pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.118373 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-config\") pod \"dnsmasq-dns-744ffd65bc-xfjdd\" (UID: \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\") " pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.137792 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mwmw\" (UniqueName: \"kubernetes.io/projected/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-kube-api-access-7mwmw\") pod \"dnsmasq-dns-744ffd65bc-xfjdd\" (UID: \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\") " pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.296093 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.328756 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-5srxf"] Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.354913 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-sjkwk"] Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.355941 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.357382 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46aa6aa7-cacb-4442-9ff2-04962172adae-config\") pod \"dnsmasq-dns-95f5f6995-sjkwk\" (UID: \"46aa6aa7-cacb-4442-9ff2-04962172adae\") " pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.357457 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k92fn\" (UniqueName: \"kubernetes.io/projected/46aa6aa7-cacb-4442-9ff2-04962172adae-kube-api-access-k92fn\") pod \"dnsmasq-dns-95f5f6995-sjkwk\" (UID: \"46aa6aa7-cacb-4442-9ff2-04962172adae\") " pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.357597 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46aa6aa7-cacb-4442-9ff2-04962172adae-dns-svc\") pod \"dnsmasq-dns-95f5f6995-sjkwk\" (UID: \"46aa6aa7-cacb-4442-9ff2-04962172adae\") " pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.377568 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-sjkwk"] Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 
14:02:56.459667 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46aa6aa7-cacb-4442-9ff2-04962172adae-dns-svc\") pod \"dnsmasq-dns-95f5f6995-sjkwk\" (UID: \"46aa6aa7-cacb-4442-9ff2-04962172adae\") " pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.460002 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46aa6aa7-cacb-4442-9ff2-04962172adae-config\") pod \"dnsmasq-dns-95f5f6995-sjkwk\" (UID: \"46aa6aa7-cacb-4442-9ff2-04962172adae\") " pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.460034 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k92fn\" (UniqueName: \"kubernetes.io/projected/46aa6aa7-cacb-4442-9ff2-04962172adae-kube-api-access-k92fn\") pod \"dnsmasq-dns-95f5f6995-sjkwk\" (UID: \"46aa6aa7-cacb-4442-9ff2-04962172adae\") " pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.461387 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46aa6aa7-cacb-4442-9ff2-04962172adae-dns-svc\") pod \"dnsmasq-dns-95f5f6995-sjkwk\" (UID: \"46aa6aa7-cacb-4442-9ff2-04962172adae\") " pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.461664 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46aa6aa7-cacb-4442-9ff2-04962172adae-config\") pod \"dnsmasq-dns-95f5f6995-sjkwk\" (UID: \"46aa6aa7-cacb-4442-9ff2-04962172adae\") " pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.492769 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k92fn\" 
(UniqueName: \"kubernetes.io/projected/46aa6aa7-cacb-4442-9ff2-04962172adae-kube-api-access-k92fn\") pod \"dnsmasq-dns-95f5f6995-sjkwk\" (UID: \"46aa6aa7-cacb-4442-9ff2-04962172adae\") " pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.710818 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.853503 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-xfjdd"] Jan 27 14:02:56 crc kubenswrapper[4914]: W0127 14:02:56.864063 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b092b45_66cf_4bf2_9612_c43f2bf7b8df.slice/crio-bad9d52d59c4bb2dadd2613fbd09978b393d7fe1e98732fc6b12e639088c0f85 WatchSource:0}: Error finding container bad9d52d59c4bb2dadd2613fbd09978b393d7fe1e98732fc6b12e639088c0f85: Status 404 returned error can't find the container with id bad9d52d59c4bb2dadd2613fbd09978b393d7fe1e98732fc6b12e639088c0f85 Jan 27 14:02:56 crc kubenswrapper[4914]: I0127 14:02:56.999475 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-sjkwk"] Jan 27 14:02:57 crc kubenswrapper[4914]: W0127 14:02:57.001119 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46aa6aa7_cacb_4442_9ff2_04962172adae.slice/crio-92ad538134a7da6632d237af57570c4f5a599e3c2f5bd5b951b1058bde8b6e3a WatchSource:0}: Error finding container 92ad538134a7da6632d237af57570c4f5a599e3c2f5bd5b951b1058bde8b6e3a: Status 404 returned error can't find the container with id 92ad538134a7da6632d237af57570c4f5a599e3c2f5bd5b951b1058bde8b6e3a Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.114274 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:02:57 crc 
kubenswrapper[4914]: I0127 14:02:57.115720 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.118027 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.118574 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.119714 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.120380 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.120565 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-j27zd" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.120803 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.121245 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.131293 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.274951 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqrm8\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-kube-api-access-lqrm8\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.275329 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.275366 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.275392 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.275432 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-config-data\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.275458 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.275484 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.275515 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.275735 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9dc0242e-0a62-4f1c-b978-00f6b2651429-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.275867 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.275917 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9dc0242e-0a62-4f1c-b978-00f6b2651429-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.377131 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-config-data\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.377190 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.377223 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.377256 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.377288 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9dc0242e-0a62-4f1c-b978-00f6b2651429-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.377328 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " 
pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.377351 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9dc0242e-0a62-4f1c-b978-00f6b2651429-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.377374 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqrm8\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-kube-api-access-lqrm8\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.377419 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.377447 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.377471 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.378499 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.379378 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-config-data\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.381032 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.381085 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.381329 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.381927 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.386402 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.386627 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9dc0242e-0a62-4f1c-b978-00f6b2651429-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.386900 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9dc0242e-0a62-4f1c-b978-00f6b2651429-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.400578 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.400647 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqrm8\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-kube-api-access-lqrm8\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.403630 4914 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") " pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.470275 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" event={"ID":"46aa6aa7-cacb-4442-9ff2-04962172adae","Type":"ContainerStarted","Data":"92ad538134a7da6632d237af57570c4f5a599e3c2f5bd5b951b1058bde8b6e3a"} Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.475592 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" event={"ID":"6b092b45-66cf-4bf2-9612-c43f2bf7b8df","Type":"ContainerStarted","Data":"bad9d52d59c4bb2dadd2613fbd09978b393d7fe1e98732fc6b12e639088c0f85"} Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.483879 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.484357 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.485473 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.494633 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.495768 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.496080 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7n2f5" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.496440 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.496600 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.496693 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.496775 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.496979 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.583402 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.583448 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.583471 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.583639 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.583701 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.583727 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.583773 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.583862 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ead132f0-586e-402b-87bb-f7109396498d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.583960 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.584070 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjhk\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-kube-api-access-vwjhk\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.584239 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ead132f0-586e-402b-87bb-f7109396498d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.685560 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.685623 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwjhk\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-kube-api-access-vwjhk\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.685647 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ead132f0-586e-402b-87bb-f7109396498d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.685709 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.685732 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.685803 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.685930 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.685962 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.686014 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.686073 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.686127 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ead132f0-586e-402b-87bb-f7109396498d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.687224 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.687438 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.687548 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.687767 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.688374 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.691971 4914 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ead132f0-586e-402b-87bb-f7109396498d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.696358 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.698212 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ead132f0-586e-402b-87bb-f7109396498d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.712466 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.716488 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwjhk\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-kube-api-access-vwjhk\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.734074 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.754325 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:57 crc kubenswrapper[4914]: I0127 14:02:57.853614 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.818025 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.819935 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.822443 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-x95qq" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.822885 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.830294 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.830885 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.835770 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.836307 4914 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.907824 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd387894-ddd7-4982-b8be-bb8bcea88486-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.908061 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd387894-ddd7-4982-b8be-bb8bcea88486-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.908158 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd387894-ddd7-4982-b8be-bb8bcea88486-kolla-config\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.908281 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fd387894-ddd7-4982-b8be-bb8bcea88486-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.908359 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd387894-ddd7-4982-b8be-bb8bcea88486-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " 
pod="openstack/openstack-galera-0" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.908392 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.908456 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fd387894-ddd7-4982-b8be-bb8bcea88486-config-data-default\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:58 crc kubenswrapper[4914]: I0127 14:02:58.908528 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-549l7\" (UniqueName: \"kubernetes.io/projected/fd387894-ddd7-4982-b8be-bb8bcea88486-kube-api-access-549l7\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.009909 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd387894-ddd7-4982-b8be-bb8bcea88486-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.009978 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd387894-ddd7-4982-b8be-bb8bcea88486-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 
14:02:59.010006 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd387894-ddd7-4982-b8be-bb8bcea88486-kolla-config\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.010064 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fd387894-ddd7-4982-b8be-bb8bcea88486-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.010091 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd387894-ddd7-4982-b8be-bb8bcea88486-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.010308 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.010631 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fd387894-ddd7-4982-b8be-bb8bcea88486-config-data-default\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.010579 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/fd387894-ddd7-4982-b8be-bb8bcea88486-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.010696 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.010706 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-549l7\" (UniqueName: \"kubernetes.io/projected/fd387894-ddd7-4982-b8be-bb8bcea88486-kube-api-access-549l7\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.011432 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fd387894-ddd7-4982-b8be-bb8bcea88486-config-data-default\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.011495 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd387894-ddd7-4982-b8be-bb8bcea88486-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.014251 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd387894-ddd7-4982-b8be-bb8bcea88486-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.029900 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd387894-ddd7-4982-b8be-bb8bcea88486-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.030069 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd387894-ddd7-4982-b8be-bb8bcea88486-kolla-config\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.030493 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-549l7\" (UniqueName: \"kubernetes.io/projected/fd387894-ddd7-4982-b8be-bb8bcea88486-kube-api-access-549l7\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.041293 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"fd387894-ddd7-4982-b8be-bb8bcea88486\") " pod="openstack/openstack-galera-0" Jan 27 14:02:59 crc kubenswrapper[4914]: I0127 14:02:59.159746 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.059335 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.061356 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.063791 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-w2h49" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.064025 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.064307 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.064774 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.069766 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.138271 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.138360 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/541d0024-7ae1-4b5a-b139-35fe77463191-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.138435 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/541d0024-7ae1-4b5a-b139-35fe77463191-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.138719 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/541d0024-7ae1-4b5a-b139-35fe77463191-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.138790 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541d0024-7ae1-4b5a-b139-35fe77463191-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.138852 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/541d0024-7ae1-4b5a-b139-35fe77463191-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.138878 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glzr8\" (UniqueName: \"kubernetes.io/projected/541d0024-7ae1-4b5a-b139-35fe77463191-kube-api-access-glzr8\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.138981 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/541d0024-7ae1-4b5a-b139-35fe77463191-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.240945 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/541d0024-7ae1-4b5a-b139-35fe77463191-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.241027 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.241084 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/541d0024-7ae1-4b5a-b139-35fe77463191-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.241129 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/541d0024-7ae1-4b5a-b139-35fe77463191-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.241161 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/541d0024-7ae1-4b5a-b139-35fe77463191-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.241196 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541d0024-7ae1-4b5a-b139-35fe77463191-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.241223 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/541d0024-7ae1-4b5a-b139-35fe77463191-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.241251 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glzr8\" (UniqueName: \"kubernetes.io/projected/541d0024-7ae1-4b5a-b139-35fe77463191-kube-api-access-glzr8\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.241268 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.241822 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/541d0024-7ae1-4b5a-b139-35fe77463191-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.242261 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/541d0024-7ae1-4b5a-b139-35fe77463191-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.243313 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/541d0024-7ae1-4b5a-b139-35fe77463191-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.243791 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/541d0024-7ae1-4b5a-b139-35fe77463191-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.247160 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/541d0024-7ae1-4b5a-b139-35fe77463191-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.248419 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541d0024-7ae1-4b5a-b139-35fe77463191-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc 
kubenswrapper[4914]: I0127 14:03:00.262507 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glzr8\" (UniqueName: \"kubernetes.io/projected/541d0024-7ae1-4b5a-b139-35fe77463191-kube-api-access-glzr8\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.281445 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"541d0024-7ae1-4b5a-b139-35fe77463191\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.385881 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.386592 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.386754 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.391731 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.391811 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lwm8p" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.392021 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.408857 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.546180 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ce960bf5-10e9-4b71-a092-a5b4013adbdf-config-data\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.546299 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce960bf5-10e9-4b71-a092-a5b4013adbdf-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.546341 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv4cp\" (UniqueName: \"kubernetes.io/projected/ce960bf5-10e9-4b71-a092-a5b4013adbdf-kube-api-access-zv4cp\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.546363 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce960bf5-10e9-4b71-a092-a5b4013adbdf-kolla-config\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.546487 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce960bf5-10e9-4b71-a092-a5b4013adbdf-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.647654 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ce960bf5-10e9-4b71-a092-a5b4013adbdf-config-data\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.647710 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce960bf5-10e9-4b71-a092-a5b4013adbdf-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.647754 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv4cp\" (UniqueName: \"kubernetes.io/projected/ce960bf5-10e9-4b71-a092-a5b4013adbdf-kube-api-access-zv4cp\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.647789 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce960bf5-10e9-4b71-a092-a5b4013adbdf-kolla-config\") pod \"memcached-0\" (UID: 
\"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.647865 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce960bf5-10e9-4b71-a092-a5b4013adbdf-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.651619 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ce960bf5-10e9-4b71-a092-a5b4013adbdf-config-data\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.651780 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce960bf5-10e9-4b71-a092-a5b4013adbdf-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.651940 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce960bf5-10e9-4b71-a092-a5b4013adbdf-kolla-config\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.669645 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce960bf5-10e9-4b71-a092-a5b4013adbdf-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.672436 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv4cp\" (UniqueName: 
\"kubernetes.io/projected/ce960bf5-10e9-4b71-a092-a5b4013adbdf-kube-api-access-zv4cp\") pod \"memcached-0\" (UID: \"ce960bf5-10e9-4b71-a092-a5b4013adbdf\") " pod="openstack/memcached-0" Jan 27 14:03:00 crc kubenswrapper[4914]: I0127 14:03:00.718142 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 14:03:02 crc kubenswrapper[4914]: I0127 14:03:02.728750 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:03:02 crc kubenswrapper[4914]: I0127 14:03:02.732069 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:03:02 crc kubenswrapper[4914]: I0127 14:03:02.738359 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-6wsrg" Jan 27 14:03:02 crc kubenswrapper[4914]: I0127 14:03:02.741116 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:03:02 crc kubenswrapper[4914]: I0127 14:03:02.884009 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t2ss\" (UniqueName: \"kubernetes.io/projected/cc356ffd-559b-403b-8f0f-8bb7518dd9b7-kube-api-access-2t2ss\") pod \"kube-state-metrics-0\" (UID: \"cc356ffd-559b-403b-8f0f-8bb7518dd9b7\") " pod="openstack/kube-state-metrics-0" Jan 27 14:03:02 crc kubenswrapper[4914]: I0127 14:03:02.987746 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t2ss\" (UniqueName: \"kubernetes.io/projected/cc356ffd-559b-403b-8f0f-8bb7518dd9b7-kube-api-access-2t2ss\") pod \"kube-state-metrics-0\" (UID: \"cc356ffd-559b-403b-8f0f-8bb7518dd9b7\") " pod="openstack/kube-state-metrics-0" Jan 27 14:03:03 crc kubenswrapper[4914]: I0127 14:03:03.017504 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t2ss\" (UniqueName: 
\"kubernetes.io/projected/cc356ffd-559b-403b-8f0f-8bb7518dd9b7-kube-api-access-2t2ss\") pod \"kube-state-metrics-0\" (UID: \"cc356ffd-559b-403b-8f0f-8bb7518dd9b7\") " pod="openstack/kube-state-metrics-0" Jan 27 14:03:03 crc kubenswrapper[4914]: I0127 14:03:03.048301 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:03:05 crc kubenswrapper[4914]: I0127 14:03:05.892628 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fwgsp"] Jan 27 14:03:05 crc kubenswrapper[4914]: I0127 14:03:05.894529 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:05 crc kubenswrapper[4914]: I0127 14:03:05.897802 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 27 14:03:05 crc kubenswrapper[4914]: I0127 14:03:05.898031 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 14:03:05 crc kubenswrapper[4914]: I0127 14:03:05.898630 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-8hn58" Jan 27 14:03:05 crc kubenswrapper[4914]: I0127 14:03:05.900660 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-d2xdr"] Jan 27 14:03:05 crc kubenswrapper[4914]: I0127 14:03:05.903022 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:05 crc kubenswrapper[4914]: I0127 14:03:05.925041 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fwgsp"] Jan 27 14:03:05 crc kubenswrapper[4914]: I0127 14:03:05.954517 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-d2xdr"] Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.032380 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpknx\" (UniqueName: \"kubernetes.io/projected/16d7aef1-746e-4166-a82d-e40371ebc96c-kube-api-access-vpknx\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.032455 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7668e140-246e-470b-8988-8d716fa6580b-var-run\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.032599 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gsrc\" (UniqueName: \"kubernetes.io/projected/7668e140-246e-470b-8988-8d716fa6580b-kube-api-access-4gsrc\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.032632 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7668e140-246e-470b-8988-8d716fa6580b-var-log\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc 
kubenswrapper[4914]: I0127 14:03:06.032653 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16d7aef1-746e-4166-a82d-e40371ebc96c-var-run-ovn\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.032701 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/16d7aef1-746e-4166-a82d-e40371ebc96c-ovn-controller-tls-certs\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.032722 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d7aef1-746e-4166-a82d-e40371ebc96c-combined-ca-bundle\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.032759 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16d7aef1-746e-4166-a82d-e40371ebc96c-var-log-ovn\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.032796 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7668e140-246e-470b-8988-8d716fa6580b-scripts\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 
14:03:06.032825 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16d7aef1-746e-4166-a82d-e40371ebc96c-var-run\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.033065 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7668e140-246e-470b-8988-8d716fa6580b-etc-ovs\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.033119 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16d7aef1-746e-4166-a82d-e40371ebc96c-scripts\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.033247 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7668e140-246e-470b-8988-8d716fa6580b-var-lib\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.135145 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7668e140-246e-470b-8988-8d716fa6580b-var-lib\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.135226 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vpknx\" (UniqueName: \"kubernetes.io/projected/16d7aef1-746e-4166-a82d-e40371ebc96c-kube-api-access-vpknx\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.135251 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7668e140-246e-470b-8988-8d716fa6580b-var-run\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.135275 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gsrc\" (UniqueName: \"kubernetes.io/projected/7668e140-246e-470b-8988-8d716fa6580b-kube-api-access-4gsrc\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.135293 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7668e140-246e-470b-8988-8d716fa6580b-var-log\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.135307 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16d7aef1-746e-4166-a82d-e40371ebc96c-var-run-ovn\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.135343 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/16d7aef1-746e-4166-a82d-e40371ebc96c-ovn-controller-tls-certs\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.135358 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d7aef1-746e-4166-a82d-e40371ebc96c-combined-ca-bundle\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.135382 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16d7aef1-746e-4166-a82d-e40371ebc96c-var-log-ovn\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.135411 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7668e140-246e-470b-8988-8d716fa6580b-scripts\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.135433 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16d7aef1-746e-4166-a82d-e40371ebc96c-var-run\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.135464 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7668e140-246e-470b-8988-8d716fa6580b-etc-ovs\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") 
" pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.135481 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16d7aef1-746e-4166-a82d-e40371ebc96c-scripts\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.138222 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16d7aef1-746e-4166-a82d-e40371ebc96c-scripts\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.139114 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16d7aef1-746e-4166-a82d-e40371ebc96c-var-run\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.139301 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16d7aef1-746e-4166-a82d-e40371ebc96c-var-log-ovn\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.139352 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7668e140-246e-470b-8988-8d716fa6580b-var-run\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.139448 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/7668e140-246e-470b-8988-8d716fa6580b-var-log\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.139526 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16d7aef1-746e-4166-a82d-e40371ebc96c-var-run-ovn\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.139537 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7668e140-246e-470b-8988-8d716fa6580b-etc-ovs\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.139676 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7668e140-246e-470b-8988-8d716fa6580b-var-lib\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.140898 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7668e140-246e-470b-8988-8d716fa6580b-scripts\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.157945 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/16d7aef1-746e-4166-a82d-e40371ebc96c-ovn-controller-tls-certs\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " 
pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.164924 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d7aef1-746e-4166-a82d-e40371ebc96c-combined-ca-bundle\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.167478 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gsrc\" (UniqueName: \"kubernetes.io/projected/7668e140-246e-470b-8988-8d716fa6580b-kube-api-access-4gsrc\") pod \"ovn-controller-ovs-d2xdr\" (UID: \"7668e140-246e-470b-8988-8d716fa6580b\") " pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.168771 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpknx\" (UniqueName: \"kubernetes.io/projected/16d7aef1-746e-4166-a82d-e40371ebc96c-kube-api-access-vpknx\") pod \"ovn-controller-fwgsp\" (UID: \"16d7aef1-746e-4166-a82d-e40371ebc96c\") " pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.229433 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fwgsp" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.246139 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.528644 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.532390 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.541185 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.547718 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.547930 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.547983 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zjrhq" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.547996 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.548880 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.641580 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.641645 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c74db2f0-b0f5-420d-970a-ecebd81bff03-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.641676 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/c74db2f0-b0f5-420d-970a-ecebd81bff03-config\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.641707 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlpk2\" (UniqueName: \"kubernetes.io/projected/c74db2f0-b0f5-420d-970a-ecebd81bff03-kube-api-access-wlpk2\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.641749 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c74db2f0-b0f5-420d-970a-ecebd81bff03-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.641777 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c74db2f0-b0f5-420d-970a-ecebd81bff03-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.641797 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c74db2f0-b0f5-420d-970a-ecebd81bff03-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.641869 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c74db2f0-b0f5-420d-970a-ecebd81bff03-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.743211 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c74db2f0-b0f5-420d-970a-ecebd81bff03-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.743291 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c74db2f0-b0f5-420d-970a-ecebd81bff03-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.743358 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c74db2f0-b0f5-420d-970a-ecebd81bff03-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.743428 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.743463 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c74db2f0-b0f5-420d-970a-ecebd81bff03-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" 
Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.743488 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c74db2f0-b0f5-420d-970a-ecebd81bff03-config\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.743520 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlpk2\" (UniqueName: \"kubernetes.io/projected/c74db2f0-b0f5-420d-970a-ecebd81bff03-kube-api-access-wlpk2\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.743551 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c74db2f0-b0f5-420d-970a-ecebd81bff03-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.743752 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c74db2f0-b0f5-420d-970a-ecebd81bff03-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.744082 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.744459 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c74db2f0-b0f5-420d-970a-ecebd81bff03-config\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.744741 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c74db2f0-b0f5-420d-970a-ecebd81bff03-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.747814 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c74db2f0-b0f5-420d-970a-ecebd81bff03-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.748279 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c74db2f0-b0f5-420d-970a-ecebd81bff03-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.748795 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c74db2f0-b0f5-420d-970a-ecebd81bff03-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.760185 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlpk2\" (UniqueName: \"kubernetes.io/projected/c74db2f0-b0f5-420d-970a-ecebd81bff03-kube-api-access-wlpk2\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " 
pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.763122 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c74db2f0-b0f5-420d-970a-ecebd81bff03\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:06 crc kubenswrapper[4914]: I0127 14:03:06.862129 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:07 crc kubenswrapper[4914]: I0127 14:03:07.691294 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:03:07 crc kubenswrapper[4914]: I0127 14:03:07.691745 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:03:07 crc kubenswrapper[4914]: I0127 14:03:07.691792 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 14:03:07 crc kubenswrapper[4914]: I0127 14:03:07.692513 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1eaab6549a5f3b2138f4d755eedcddc2ad9f911aba3a749ef9e6dd2fe3f38be3"} pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:03:07 crc kubenswrapper[4914]: I0127 14:03:07.692567 4914 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" containerID="cri-o://1eaab6549a5f3b2138f4d755eedcddc2ad9f911aba3a749ef9e6dd2fe3f38be3" gracePeriod=600 Jan 27 14:03:08 crc kubenswrapper[4914]: I0127 14:03:08.556156 4914 generic.go:334] "Generic (PLEG): container finished" podID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerID="1eaab6549a5f3b2138f4d755eedcddc2ad9f911aba3a749ef9e6dd2fe3f38be3" exitCode=0 Jan 27 14:03:08 crc kubenswrapper[4914]: I0127 14:03:08.556194 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerDied","Data":"1eaab6549a5f3b2138f4d755eedcddc2ad9f911aba3a749ef9e6dd2fe3f38be3"} Jan 27 14:03:08 crc kubenswrapper[4914]: I0127 14:03:08.556224 4914 scope.go:117] "RemoveContainer" containerID="a0fd1f806130ae08db1dd18a20f06b6fe85e397f8f5aa2658045094f139caa41" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.108768 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.110328 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.113459 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wmt5b" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.113879 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.114188 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.115308 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.121118 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.181701 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe285074-4726-42c1-99cc-d99be63c1cbc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.181756 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe285074-4726-42c1-99cc-d99be63c1cbc-config\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.181781 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fe285074-4726-42c1-99cc-d99be63c1cbc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.181797 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe285074-4726-42c1-99cc-d99be63c1cbc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.181814 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.181863 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97d7w\" (UniqueName: \"kubernetes.io/projected/fe285074-4726-42c1-99cc-d99be63c1cbc-kube-api-access-97d7w\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.182668 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe285074-4726-42c1-99cc-d99be63c1cbc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.182729 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe285074-4726-42c1-99cc-d99be63c1cbc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 
14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.285358 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fe285074-4726-42c1-99cc-d99be63c1cbc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.285400 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe285074-4726-42c1-99cc-d99be63c1cbc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.285521 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.285544 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97d7w\" (UniqueName: \"kubernetes.io/projected/fe285074-4726-42c1-99cc-d99be63c1cbc-kube-api-access-97d7w\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.285612 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe285074-4726-42c1-99cc-d99be63c1cbc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.285632 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/fe285074-4726-42c1-99cc-d99be63c1cbc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.285688 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe285074-4726-42c1-99cc-d99be63c1cbc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.285709 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe285074-4726-42c1-99cc-d99be63c1cbc-config\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.285952 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.286643 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe285074-4726-42c1-99cc-d99be63c1cbc-config\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.286852 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe285074-4726-42c1-99cc-d99be63c1cbc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc 
kubenswrapper[4914]: I0127 14:03:09.287188 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fe285074-4726-42c1-99cc-d99be63c1cbc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.296255 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe285074-4726-42c1-99cc-d99be63c1cbc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.296345 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe285074-4726-42c1-99cc-d99be63c1cbc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.305470 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97d7w\" (UniqueName: \"kubernetes.io/projected/fe285074-4726-42c1-99cc-d99be63c1cbc-kube-api-access-97d7w\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.305637 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe285074-4726-42c1-99cc-d99be63c1cbc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.312350 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"fe285074-4726-42c1-99cc-d99be63c1cbc\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:09 crc kubenswrapper[4914]: I0127 14:03:09.431651 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:10 crc kubenswrapper[4914]: E0127 14:03:10.780493 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 27 14:03:10 crc kubenswrapper[4914]: E0127 14:03:10.781092 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hz9nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-vxpjb_openstack(9cce5d04-a158-45e3-9512-2c97d2d9d4ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:03:10 crc kubenswrapper[4914]: E0127 14:03:10.782294 4914 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" podUID="9cce5d04-a158-45e3-9512-2c97d2d9d4ff" Jan 27 14:03:10 crc kubenswrapper[4914]: E0127 14:03:10.944516 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 27 14:03:10 crc kubenswrapper[4914]: E0127 14:03:10.944664 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5j9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-5srxf_openstack(98ea0694-2ad6-4e8f-ad31-69f3506e0d90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:03:10 crc kubenswrapper[4914]: E0127 14:03:10.945896 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-84bb9d8bd9-5srxf" podUID="98ea0694-2ad6-4e8f-ad31-69f3506e0d90" Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.162482 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:03:11 crc kubenswrapper[4914]: W0127 14:03:11.169926 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dc0242e_0a62_4f1c_b978_00f6b2651429.slice/crio-31b1c3b1a1709616dbd7c6dad02dd0ec92db6c6158ac5ac4c69c452e06450637 WatchSource:0}: Error finding container 31b1c3b1a1709616dbd7c6dad02dd0ec92db6c6158ac5ac4c69c452e06450637: Status 404 returned error can't find the container with id 31b1c3b1a1709616dbd7c6dad02dd0ec92db6c6158ac5ac4c69c452e06450637 Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.172511 4914 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.240551 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 14:03:11 crc kubenswrapper[4914]: W0127 14:03:11.249469 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd387894_ddd7_4982_b8be_bb8bcea88486.slice/crio-6dd501c229edec3322100a458576a989d5cafbbe0fb5eb074124f88a79a69854 WatchSource:0}: Error finding container 6dd501c229edec3322100a458576a989d5cafbbe0fb5eb074124f88a79a69854: Status 404 returned error can't find the container with id 6dd501c229edec3322100a458576a989d5cafbbe0fb5eb074124f88a79a69854 Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.356007 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.502468 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 
14:03:11.511403 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.536556 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 14:03:11 crc kubenswrapper[4914]: W0127 14:03:11.538529 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc356ffd_559b_403b_8f0f_8bb7518dd9b7.slice/crio-3aa1459f1e922323f59079f3f99b59e522ea765b0918ce984947c17cde4f0d06 WatchSource:0}: Error finding container 3aa1459f1e922323f59079f3f99b59e522ea765b0918ce984947c17cde4f0d06: Status 404 returned error can't find the container with id 3aa1459f1e922323f59079f3f99b59e522ea765b0918ce984947c17cde4f0d06 Jan 27 14:03:11 crc kubenswrapper[4914]: W0127 14:03:11.562060 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod541d0024_7ae1_4b5a_b139_35fe77463191.slice/crio-18ce033abe816cb4e4dc124c73482f7967a56204e07b3a72549e21adb6851278 WatchSource:0}: Error finding container 18ce033abe816cb4e4dc124c73482f7967a56204e07b3a72549e21adb6851278: Status 404 returned error can't find the container with id 18ce033abe816cb4e4dc124c73482f7967a56204e07b3a72549e21adb6851278 Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.578928 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"541d0024-7ae1-4b5a-b139-35fe77463191","Type":"ContainerStarted","Data":"18ce033abe816cb4e4dc124c73482f7967a56204e07b3a72549e21adb6851278"} Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.583353 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"d7c21b1cd9cda80b642f46a096fe84b98a11cc182c636f7e3bfaf4ae3f160417"} 
Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.585966 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9dc0242e-0a62-4f1c-b978-00f6b2651429","Type":"ContainerStarted","Data":"31b1c3b1a1709616dbd7c6dad02dd0ec92db6c6158ac5ac4c69c452e06450637"}
Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.588164 4914 generic.go:334] "Generic (PLEG): container finished" podID="46aa6aa7-cacb-4442-9ff2-04962172adae" containerID="04956f3ea9ad99e0f0760fade36dcb30dfa5a8bd75fab400c61cf927f8549ca6" exitCode=0
Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.588397 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" event={"ID":"46aa6aa7-cacb-4442-9ff2-04962172adae","Type":"ContainerDied","Data":"04956f3ea9ad99e0f0760fade36dcb30dfa5a8bd75fab400c61cf927f8549ca6"}
Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.590058 4914 generic.go:334] "Generic (PLEG): container finished" podID="6b092b45-66cf-4bf2-9612-c43f2bf7b8df" containerID="793d977bc533c60536401e5d57d2b57e8e738f727578b0662eb99b0bf4e39573" exitCode=0
Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.590082 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" event={"ID":"6b092b45-66cf-4bf2-9612-c43f2bf7b8df","Type":"ContainerDied","Data":"793d977bc533c60536401e5d57d2b57e8e738f727578b0662eb99b0bf4e39573"}
Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.593045 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ce960bf5-10e9-4b71-a092-a5b4013adbdf","Type":"ContainerStarted","Data":"b3915f7e09466daeed1ba4afcc097292298a39a1c1aa18adf03f61e7906681f8"}
Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.596557 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fd387894-ddd7-4982-b8be-bb8bcea88486","Type":"ContainerStarted","Data":"6dd501c229edec3322100a458576a989d5cafbbe0fb5eb074124f88a79a69854"}
Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.604260 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ead132f0-586e-402b-87bb-f7109396498d","Type":"ContainerStarted","Data":"0127ddfa307129ca7e976b76f78db663c2a0d5c0db9c6ac3bd4b97acf18d678f"}
Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.605752 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc356ffd-559b-403b-8f0f-8bb7518dd9b7","Type":"ContainerStarted","Data":"3aa1459f1e922323f59079f3f99b59e522ea765b0918ce984947c17cde4f0d06"}
Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.622628 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 27 14:03:11 crc kubenswrapper[4914]: I0127 14:03:11.683620 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fwgsp"]
Jan 27 14:03:11 crc kubenswrapper[4914]: W0127 14:03:11.692259 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16d7aef1_746e_4166_a82d_e40371ebc96c.slice/crio-e033698f18d803f5122a99f31607b984537a6cf38bc07d8cc9c3b2577c167cc6 WatchSource:0}: Error finding container e033698f18d803f5122a99f31607b984537a6cf38bc07d8cc9c3b2577c167cc6: Status 404 returned error can't find the container with id e033698f18d803f5122a99f31607b984537a6cf38bc07d8cc9c3b2577c167cc6
Jan 27 14:03:11 crc kubenswrapper[4914]: E0127 14:03:11.868512 4914 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Jan 27 14:03:11 crc kubenswrapper[4914]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/6b092b45-66cf-4bf2-9612-c43f2bf7b8df/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Jan 27 14:03:11 crc kubenswrapper[4914]: > podSandboxID="bad9d52d59c4bb2dadd2613fbd09978b393d7fe1e98732fc6b12e639088c0f85"
Jan 27 14:03:11 crc kubenswrapper[4914]: E0127 14:03:11.868706 4914 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 27 14:03:11 crc kubenswrapper[4914]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mwmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-xfjdd_openstack(6b092b45-66cf-4bf2-9612-c43f2bf7b8df): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/6b092b45-66cf-4bf2-9612-c43f2bf7b8df/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Jan 27 14:03:11 crc kubenswrapper[4914]: > logger="UnhandledError"
Jan 27 14:03:11 crc kubenswrapper[4914]: E0127 14:03:11.870154 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/6b092b45-66cf-4bf2-9612-c43f2bf7b8df/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" podUID="6b092b45-66cf-4bf2-9612-c43f2bf7b8df"
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.003266 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-vxpjb"
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.008375 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-5srxf"
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.057018 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz9nf\" (UniqueName: \"kubernetes.io/projected/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-kube-api-access-hz9nf\") pod \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\" (UID: \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\") "
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.057137 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-config\") pod \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\" (UID: \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\") "
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.057266 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-dns-svc\") pod \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\" (UID: \"9cce5d04-a158-45e3-9512-2c97d2d9d4ff\") "
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.057716 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9cce5d04-a158-45e3-9512-2c97d2d9d4ff" (UID: "9cce5d04-a158-45e3-9512-2c97d2d9d4ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.057754 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-config" (OuterVolumeSpecName: "config") pod "9cce5d04-a158-45e3-9512-2c97d2d9d4ff" (UID: "9cce5d04-a158-45e3-9512-2c97d2d9d4ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.063030 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-kube-api-access-hz9nf" (OuterVolumeSpecName: "kube-api-access-hz9nf") pod "9cce5d04-a158-45e3-9512-2c97d2d9d4ff" (UID: "9cce5d04-a158-45e3-9512-2c97d2d9d4ff"). InnerVolumeSpecName "kube-api-access-hz9nf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.158934 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j9sl\" (UniqueName: \"kubernetes.io/projected/98ea0694-2ad6-4e8f-ad31-69f3506e0d90-kube-api-access-5j9sl\") pod \"98ea0694-2ad6-4e8f-ad31-69f3506e0d90\" (UID: \"98ea0694-2ad6-4e8f-ad31-69f3506e0d90\") "
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.159570 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ea0694-2ad6-4e8f-ad31-69f3506e0d90-config\") pod \"98ea0694-2ad6-4e8f-ad31-69f3506e0d90\" (UID: \"98ea0694-2ad6-4e8f-ad31-69f3506e0d90\") "
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.160314 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.160345 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz9nf\" (UniqueName: \"kubernetes.io/projected/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-kube-api-access-hz9nf\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.160359 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cce5d04-a158-45e3-9512-2c97d2d9d4ff-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.160404 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ea0694-2ad6-4e8f-ad31-69f3506e0d90-config" (OuterVolumeSpecName: "config") pod "98ea0694-2ad6-4e8f-ad31-69f3506e0d90" (UID: "98ea0694-2ad6-4e8f-ad31-69f3506e0d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.162085 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ea0694-2ad6-4e8f-ad31-69f3506e0d90-kube-api-access-5j9sl" (OuterVolumeSpecName: "kube-api-access-5j9sl") pod "98ea0694-2ad6-4e8f-ad31-69f3506e0d90" (UID: "98ea0694-2ad6-4e8f-ad31-69f3506e0d90"). InnerVolumeSpecName "kube-api-access-5j9sl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.264244 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j9sl\" (UniqueName: \"kubernetes.io/projected/98ea0694-2ad6-4e8f-ad31-69f3506e0d90-kube-api-access-5j9sl\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.264295 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ea0694-2ad6-4e8f-ad31-69f3506e0d90-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.288574 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-d2xdr"]
Jan 27 14:03:12 crc kubenswrapper[4914]: W0127 14:03:12.381900 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc74db2f0_b0f5_420d_970a_ecebd81bff03.slice/crio-b12d18bc1d72bb1fe0c48c0263abda8433a38ae616a4333c0813efb34d1ee303 WatchSource:0}: Error finding container b12d18bc1d72bb1fe0c48c0263abda8433a38ae616a4333c0813efb34d1ee303: Status 404 returned error can't find the container with id b12d18bc1d72bb1fe0c48c0263abda8433a38ae616a4333c0813efb34d1ee303
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.385989 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.616964 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d2xdr" event={"ID":"7668e140-246e-470b-8988-8d716fa6580b","Type":"ContainerStarted","Data":"5aa68a3fcca48da00c5901496964de093f75cc6ab637c2db0e20b266381f814f"}
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.618636 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-5srxf" event={"ID":"98ea0694-2ad6-4e8f-ad31-69f3506e0d90","Type":"ContainerDied","Data":"9da1b703298b65697e176b91c8303c65d38b42f0fba48219d9ce6ddce7adeec3"}
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.618713 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-5srxf"
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.621881 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fwgsp" event={"ID":"16d7aef1-746e-4166-a82d-e40371ebc96c","Type":"ContainerStarted","Data":"e033698f18d803f5122a99f31607b984537a6cf38bc07d8cc9c3b2577c167cc6"}
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.624688 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fe285074-4726-42c1-99cc-d99be63c1cbc","Type":"ContainerStarted","Data":"dd413a3a18124d5a33a8bac6ef43b1cace78cbde7beacb76987ab6d1d89dcfea"}
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.627540 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" event={"ID":"46aa6aa7-cacb-4442-9ff2-04962172adae","Type":"ContainerStarted","Data":"d7a8524e33e35c61125cf1b15c3d19ff911a119c14583a02a3deb5c95d1d7dc3"}
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.627707 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-sjkwk"
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.629455 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c74db2f0-b0f5-420d-970a-ecebd81bff03","Type":"ContainerStarted","Data":"b12d18bc1d72bb1fe0c48c0263abda8433a38ae616a4333c0813efb34d1ee303"}
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.632023 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-vxpjb" event={"ID":"9cce5d04-a158-45e3-9512-2c97d2d9d4ff","Type":"ContainerDied","Data":"9baed2381725e71b75f537f507bf63c5038a41a179dd11ad2d6b6969ed75d429"}
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.632191 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-vxpjb"
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.660270 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-5srxf"]
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.666313 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-5srxf"]
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.722471 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-vxpjb"]
Jan 27 14:03:12 crc kubenswrapper[4914]: I0127 14:03:12.732216 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-vxpjb"]
Jan 27 14:03:13 crc kubenswrapper[4914]: I0127 14:03:13.642625 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" event={"ID":"6b092b45-66cf-4bf2-9612-c43f2bf7b8df","Type":"ContainerStarted","Data":"87068803020db9670f7ebcb53a7c87f39194b67e0bafed0d9fd0e9c70f6ef2b4"}
Jan 27 14:03:13 crc kubenswrapper[4914]: I0127 14:03:13.642861 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd"
Jan 27 14:03:13 crc kubenswrapper[4914]: I0127 14:03:13.661419 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" podStartSLOduration=4.524108173 podStartE2EDuration="18.66140054s" podCreationTimestamp="2026-01-27 14:02:55 +0000 UTC" firstStartedPulling="2026-01-27 14:02:56.865919583 +0000 UTC m=+1135.178269668" lastFinishedPulling="2026-01-27 14:03:11.00321195 +0000 UTC m=+1149.315562035" observedRunningTime="2026-01-27 14:03:13.661119243 +0000 UTC m=+1151.973469338" watchObservedRunningTime="2026-01-27 14:03:13.66140054 +0000 UTC m=+1151.973750625"
Jan 27 14:03:13 crc kubenswrapper[4914]: I0127 14:03:13.662112 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" podStartSLOduration=3.670827255 podStartE2EDuration="17.662105259s" podCreationTimestamp="2026-01-27 14:02:56 +0000 UTC" firstStartedPulling="2026-01-27 14:02:57.004699698 +0000 UTC m=+1135.317049783" lastFinishedPulling="2026-01-27 14:03:10.995977702 +0000 UTC m=+1149.308327787" observedRunningTime="2026-01-27 14:03:12.742589235 +0000 UTC m=+1151.054939320" watchObservedRunningTime="2026-01-27 14:03:13.662105259 +0000 UTC m=+1151.974455344"
Jan 27 14:03:14 crc kubenswrapper[4914]: I0127 14:03:14.311089 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98ea0694-2ad6-4e8f-ad31-69f3506e0d90" path="/var/lib/kubelet/pods/98ea0694-2ad6-4e8f-ad31-69f3506e0d90/volumes"
Jan 27 14:03:14 crc kubenswrapper[4914]: I0127 14:03:14.312260 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cce5d04-a158-45e3-9512-2c97d2d9d4ff" path="/var/lib/kubelet/pods/9cce5d04-a158-45e3-9512-2c97d2d9d4ff/volumes"
Jan 27 14:03:16 crc kubenswrapper[4914]: I0127 14:03:16.713034 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95f5f6995-sjkwk"
Jan 27 14:03:16 crc kubenswrapper[4914]: I0127 14:03:16.771447 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-xfjdd"]
Jan 27 14:03:16 crc kubenswrapper[4914]: I0127 14:03:16.771724 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" podUID="6b092b45-66cf-4bf2-9612-c43f2bf7b8df" containerName="dnsmasq-dns" containerID="cri-o://87068803020db9670f7ebcb53a7c87f39194b67e0bafed0d9fd0e9c70f6ef2b4" gracePeriod=10
Jan 27 14:03:17 crc kubenswrapper[4914]: I0127 14:03:17.672016 4914 generic.go:334] "Generic (PLEG): container finished" podID="6b092b45-66cf-4bf2-9612-c43f2bf7b8df" containerID="87068803020db9670f7ebcb53a7c87f39194b67e0bafed0d9fd0e9c70f6ef2b4" exitCode=0
Jan 27 14:03:17 crc kubenswrapper[4914]: I0127 14:03:17.672081 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" event={"ID":"6b092b45-66cf-4bf2-9612-c43f2bf7b8df","Type":"ContainerDied","Data":"87068803020db9670f7ebcb53a7c87f39194b67e0bafed0d9fd0e9c70f6ef2b4"}
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.429532 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd"
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.618958 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-dns-svc\") pod \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\" (UID: \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\") "
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.619072 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mwmw\" (UniqueName: \"kubernetes.io/projected/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-kube-api-access-7mwmw\") pod \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\" (UID: \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\") "
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.619107 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-config\") pod \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\" (UID: \"6b092b45-66cf-4bf2-9612-c43f2bf7b8df\") "
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.622357 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-kube-api-access-7mwmw" (OuterVolumeSpecName: "kube-api-access-7mwmw") pod "6b092b45-66cf-4bf2-9612-c43f2bf7b8df" (UID: "6b092b45-66cf-4bf2-9612-c43f2bf7b8df"). InnerVolumeSpecName "kube-api-access-7mwmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.659184 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-config" (OuterVolumeSpecName: "config") pod "6b092b45-66cf-4bf2-9612-c43f2bf7b8df" (UID: "6b092b45-66cf-4bf2-9612-c43f2bf7b8df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.672442 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b092b45-66cf-4bf2-9612-c43f2bf7b8df" (UID: "6b092b45-66cf-4bf2-9612-c43f2bf7b8df"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.699034 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" event={"ID":"6b092b45-66cf-4bf2-9612-c43f2bf7b8df","Type":"ContainerDied","Data":"bad9d52d59c4bb2dadd2613fbd09978b393d7fe1e98732fc6b12e639088c0f85"}
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.699090 4914 scope.go:117] "RemoveContainer" containerID="87068803020db9670f7ebcb53a7c87f39194b67e0bafed0d9fd0e9c70f6ef2b4"
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.699225 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd"
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.721448 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mwmw\" (UniqueName: \"kubernetes.io/projected/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-kube-api-access-7mwmw\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.721479 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.721489 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b092b45-66cf-4bf2-9612-c43f2bf7b8df-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.740669 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-xfjdd"]
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.752255 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-xfjdd"]
Jan 27 14:03:21 crc kubenswrapper[4914]: I0127 14:03:21.981893 4914 scope.go:117] "RemoveContainer" containerID="793d977bc533c60536401e5d57d2b57e8e738f727578b0662eb99b0bf4e39573"
Jan 27 14:03:22 crc kubenswrapper[4914]: I0127 14:03:22.310820 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b092b45-66cf-4bf2-9612-c43f2bf7b8df" path="/var/lib/kubelet/pods/6b092b45-66cf-4bf2-9612-c43f2bf7b8df/volumes"
Jan 27 14:03:22 crc kubenswrapper[4914]: I0127 14:03:22.709316 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"541d0024-7ae1-4b5a-b139-35fe77463191","Type":"ContainerStarted","Data":"6c0b70ebff02a0b9e3f0b899bdc5d721db23305d4f0191be5d35d0b159de38ee"}
Jan 27 14:03:22 crc kubenswrapper[4914]: I0127 14:03:22.710578 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fe285074-4726-42c1-99cc-d99be63c1cbc","Type":"ContainerStarted","Data":"d9b41b6e94a3c808362dc79fd508e26b8196cfbe19f0458afcc6812bdb150241"}
Jan 27 14:03:23 crc kubenswrapper[4914]: I0127 14:03:23.721754 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c74db2f0-b0f5-420d-970a-ecebd81bff03","Type":"ContainerStarted","Data":"524f5f6fefc69c71acff436fb10e4cdaa27421281eb4cb98e722b3d558778bee"}
Jan 27 14:03:23 crc kubenswrapper[4914]: I0127 14:03:23.722987 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ce960bf5-10e9-4b71-a092-a5b4013adbdf","Type":"ContainerStarted","Data":"b3a5cf85e29a31e6ee7131e4f62815b9d34ef61f29742e208a6e5045e737daff"}
Jan 27 14:03:23 crc kubenswrapper[4914]: I0127 14:03:23.723165 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 27 14:03:23 crc kubenswrapper[4914]: I0127 14:03:23.724363 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fd387894-ddd7-4982-b8be-bb8bcea88486","Type":"ContainerStarted","Data":"6eed4e7bd6c114aae96f76b035b33c4887d37b17f8347da8f645d6f420b1562e"}
Jan 27 14:03:23 crc kubenswrapper[4914]: I0127 14:03:23.727394 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9dc0242e-0a62-4f1c-b978-00f6b2651429","Type":"ContainerStarted","Data":"310f0d8ee69d8348287d42e688e3c34bcd22a0c014b71fa45e4908f7c3f9dc7b"}
Jan 27 14:03:23 crc kubenswrapper[4914]: I0127 14:03:23.729038 4914 generic.go:334] "Generic (PLEG): container finished" podID="7668e140-246e-470b-8988-8d716fa6580b" containerID="eb4b5f6bf2ebd8a2a8c3c1d748ca808f687d020b704e88978c8b834f3bada423" exitCode=0
Jan 27 14:03:23 crc kubenswrapper[4914]: I0127 14:03:23.729105 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d2xdr" event={"ID":"7668e140-246e-470b-8988-8d716fa6580b","Type":"ContainerDied","Data":"eb4b5f6bf2ebd8a2a8c3c1d748ca808f687d020b704e88978c8b834f3bada423"}
Jan 27 14:03:23 crc kubenswrapper[4914]: I0127 14:03:23.734664 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ead132f0-586e-402b-87bb-f7109396498d","Type":"ContainerStarted","Data":"60868d2a16744a7d8e12849ab0667e58062dc138a6494289bb33b1915cc1001f"}
Jan 27 14:03:23 crc kubenswrapper[4914]: I0127 14:03:23.737921 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fwgsp" event={"ID":"16d7aef1-746e-4166-a82d-e40371ebc96c","Type":"ContainerStarted","Data":"c1fdaccb9a8ba768c92820a587718f71fe545683683b199b1d03b04195dc993e"}
Jan 27 14:03:23 crc kubenswrapper[4914]: I0127 14:03:23.738060 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-fwgsp"
Jan 27 14:03:23 crc kubenswrapper[4914]: I0127 14:03:23.743001 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.564013087 podStartE2EDuration="23.742981999s" podCreationTimestamp="2026-01-27 14:03:00 +0000 UTC" firstStartedPulling="2026-01-27 14:03:11.53091216 +0000 UTC m=+1149.843262245" lastFinishedPulling="2026-01-27 14:03:21.709881072 +0000 UTC m=+1160.022231157" observedRunningTime="2026-01-27 14:03:23.740745497 +0000 UTC m=+1162.053095582" watchObservedRunningTime="2026-01-27 14:03:23.742981999 +0000 UTC m=+1162.055332084"
Jan 27 14:03:23 crc kubenswrapper[4914]: I0127 14:03:23.895001 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-fwgsp" podStartSLOduration=8.722622645 podStartE2EDuration="18.894982725s" podCreationTimestamp="2026-01-27 14:03:05 +0000 UTC" firstStartedPulling="2026-01-27 14:03:11.695388788 +0000 UTC m=+1150.007738873" lastFinishedPulling="2026-01-27 14:03:21.867748828 +0000 UTC m=+1160.180098953" observedRunningTime="2026-01-27 14:03:23.891988393 +0000 UTC m=+1162.204338498" watchObservedRunningTime="2026-01-27 14:03:23.894982725 +0000 UTC m=+1162.207332810"
Jan 27 14:03:24 crc kubenswrapper[4914]: I0127 14:03:24.754568 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d2xdr" event={"ID":"7668e140-246e-470b-8988-8d716fa6580b","Type":"ContainerStarted","Data":"0b855cb7446f9c05be0c214b079377ce11ab20bbc3a7ab5802d37314db01fefd"}
Jan 27 14:03:26 crc kubenswrapper[4914]: I0127 14:03:26.296503 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-744ffd65bc-xfjdd" podUID="6b092b45-66cf-4bf2-9612-c43f2bf7b8df" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.97:5353: i/o timeout"
Jan 27 14:03:29 crc kubenswrapper[4914]: I0127 14:03:29.798488 4914 generic.go:334] "Generic (PLEG): container finished" podID="541d0024-7ae1-4b5a-b139-35fe77463191" containerID="6c0b70ebff02a0b9e3f0b899bdc5d721db23305d4f0191be5d35d0b159de38ee" exitCode=0
Jan 27 14:03:29 crc kubenswrapper[4914]: I0127 14:03:29.798586 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"541d0024-7ae1-4b5a-b139-35fe77463191","Type":"ContainerDied","Data":"6c0b70ebff02a0b9e3f0b899bdc5d721db23305d4f0191be5d35d0b159de38ee"}
Jan 27 14:03:30 crc kubenswrapper[4914]: I0127 14:03:30.719618 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 27 14:03:30 crc kubenswrapper[4914]: I0127 14:03:30.808466 4914 generic.go:334] "Generic (PLEG): container finished" podID="fd387894-ddd7-4982-b8be-bb8bcea88486" containerID="6eed4e7bd6c114aae96f76b035b33c4887d37b17f8347da8f645d6f420b1562e" exitCode=0
Jan 27 14:03:30 crc kubenswrapper[4914]: I0127 14:03:30.808506 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fd387894-ddd7-4982-b8be-bb8bcea88486","Type":"ContainerDied","Data":"6eed4e7bd6c114aae96f76b035b33c4887d37b17f8347da8f645d6f420b1562e"}
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.827660 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d2xdr" event={"ID":"7668e140-246e-470b-8988-8d716fa6580b","Type":"ContainerStarted","Data":"5a963ba1464656e56ea9161b2e9e4b3f661192aaa1b3aea546039ef0d2b2b51e"}
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.828234 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-d2xdr"
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.828250 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-d2xdr"
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.833197 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fe285074-4726-42c1-99cc-d99be63c1cbc","Type":"ContainerStarted","Data":"d598472310d4b06eb83ff9490801eb23832c6d9c4624d9b5dc8c2bd2b5de19b7"}
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.836059 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c74db2f0-b0f5-420d-970a-ecebd81bff03","Type":"ContainerStarted","Data":"d16396e30fde09030974747a7359e21dbab83f9747d03c0f0734b7872a5458da"}
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.837294 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc356ffd-559b-403b-8f0f-8bb7518dd9b7","Type":"ContainerStarted","Data":"c83a94901e286d47329b27f22a945245dc9e3c483592f6ddb395b948506ae9d6"}
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.837356 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.839480 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fd387894-ddd7-4982-b8be-bb8bcea88486","Type":"ContainerStarted","Data":"2a7e858349d169d3483757f92526fc039abb10c93ebd2966b9c86acbcccf5fcb"}
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.849023 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"541d0024-7ae1-4b5a-b139-35fe77463191","Type":"ContainerStarted","Data":"7ba89c4d635a848caf157d937ea0a635095e047d011ae6ddaf9ed39fcf78e58e"}
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.851951 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-d2xdr" podStartSLOduration=19.043386255 podStartE2EDuration="27.851933532s" podCreationTimestamp="2026-01-27 14:03:05 +0000 UTC" firstStartedPulling="2026-01-27 14:03:12.295317463 +0000 UTC m=+1150.607667548" lastFinishedPulling="2026-01-27 14:03:21.10386473 +0000 UTC m=+1159.416214825" observedRunningTime="2026-01-27 14:03:32.847964414 +0000 UTC m=+1171.160314509" watchObservedRunningTime="2026-01-27 14:03:32.851933532 +0000 UTC m=+1171.164283617"
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.869988 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=25.574454398 podStartE2EDuration="35.869968237s" podCreationTimestamp="2026-01-27 14:02:57 +0000 UTC" firstStartedPulling="2026-01-27 14:03:11.254263375 +0000 UTC m=+1149.566613460" lastFinishedPulling="2026-01-27 14:03:21.549777214 +0000 UTC m=+1159.862127299" observedRunningTime="2026-01-27 14:03:32.867402017 +0000 UTC m=+1171.179752102" watchObservedRunningTime="2026-01-27 14:03:32.869968237 +0000 UTC m=+1171.182318342"
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.889294 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.304515002 podStartE2EDuration="24.889277096s" podCreationTimestamp="2026-01-27 14:03:08 +0000 UTC" firstStartedPulling="2026-01-27 14:03:11.645904294 +0000 UTC m=+1149.958254379" lastFinishedPulling="2026-01-27 14:03:32.230666388 +0000 UTC m=+1170.543016473" observedRunningTime="2026-01-27 14:03:32.882750116 +0000 UTC m=+1171.195100211" watchObservedRunningTime="2026-01-27 14:03:32.889277096 +0000 UTC m=+1171.201627181"
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.905646 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.061222494 podStartE2EDuration="27.905623793s" podCreationTimestamp="2026-01-27 14:03:05 +0000 UTC" firstStartedPulling="2026-01-27 14:03:12.38661151 +0000 UTC m=+1150.698961595" lastFinishedPulling="2026-01-27 14:03:32.231012809 +0000 UTC m=+1170.543362894" observedRunningTime="2026-01-27 14:03:32.899878365 +0000 UTC m=+1171.212228460" watchObservedRunningTime="2026-01-27 14:03:32.905623793 +0000 UTC m=+1171.217973878"
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.919222 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.219844015 podStartE2EDuration="30.919195014s" podCreationTimestamp="2026-01-27 14:03:02 +0000 UTC" firstStartedPulling="2026-01-27 14:03:11.552164581 +0000 UTC m=+1149.864514656" lastFinishedPulling="2026-01-27 14:03:32.25151557 +0000 UTC m=+1170.563865655" observedRunningTime="2026-01-27 14:03:32.917278343 +0000 UTC m=+1171.229628458" watchObservedRunningTime="2026-01-27 14:03:32.919195014 +0000 UTC m=+1171.231545099"
Jan 27 14:03:32 crc kubenswrapper[4914]: I0127 14:03:32.941616 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.465250344 podStartE2EDuration="33.941595168s" podCreationTimestamp="2026-01-27 14:02:59 +0000 UTC" firstStartedPulling="2026-01-27 14:03:11.571392207 +0000 UTC m=+1149.883742282" lastFinishedPulling="2026-01-27 14:03:22.047737021 +0000 UTC m=+1160.360087106" observedRunningTime="2026-01-27 14:03:32.935695957 +0000 UTC m=+1171.248046052" watchObservedRunningTime="2026-01-27 14:03:32.941595168 +0000 UTC m=+1171.253945273"
Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.117261 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-zr4cj"]
Jan 27 14:03:33 crc kubenswrapper[4914]: E0127 14:03:33.117671 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b092b45-66cf-4bf2-9612-c43f2bf7b8df" containerName="dnsmasq-dns"
Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.117697 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b092b45-66cf-4bf2-9612-c43f2bf7b8df" containerName="dnsmasq-dns"
Jan 27 14:03:33 crc kubenswrapper[4914]: E0127 14:03:33.117723 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b092b45-66cf-4bf2-9612-c43f2bf7b8df" containerName="init"
Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.117732 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b092b45-66cf-4bf2-9612-c43f2bf7b8df" containerName="init"
Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.117945 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b092b45-66cf-4bf2-9612-c43f2bf7b8df" containerName="dnsmasq-dns"
Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.118997 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.151079 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-zr4cj"] Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.220524 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-zr4cj\" (UID: \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\") " pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.221033 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-config\") pod \"dnsmasq-dns-7f9f9f545f-zr4cj\" (UID: \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\") " pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.221234 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qpnb\" (UniqueName: \"kubernetes.io/projected/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-kube-api-access-2qpnb\") pod \"dnsmasq-dns-7f9f9f545f-zr4cj\" (UID: \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\") " pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.322999 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-config\") pod \"dnsmasq-dns-7f9f9f545f-zr4cj\" (UID: \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\") " pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.323099 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qpnb\" (UniqueName: 
\"kubernetes.io/projected/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-kube-api-access-2qpnb\") pod \"dnsmasq-dns-7f9f9f545f-zr4cj\" (UID: \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\") " pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.324007 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-zr4cj\" (UID: \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\") " pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.324247 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-config\") pod \"dnsmasq-dns-7f9f9f545f-zr4cj\" (UID: \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\") " pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.324603 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-zr4cj\" (UID: \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\") " pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.342068 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qpnb\" (UniqueName: \"kubernetes.io/projected/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-kube-api-access-2qpnb\") pod \"dnsmasq-dns-7f9f9f545f-zr4cj\" (UID: \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\") " pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.432638 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.447111 4914 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.476178 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.857479 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.862823 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.894854 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-zr4cj"] Jan 27 14:03:33 crc kubenswrapper[4914]: W0127 14:03:33.903538 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2de2f0f_8048_4eee_8baf_991ad19a4ffa.slice/crio-cd5374c9a53a05df241c976687017e542ea9b241c8b393c2bcd28e0c9656d376 WatchSource:0}: Error finding container cd5374c9a53a05df241c976687017e542ea9b241c8b393c2bcd28e0c9656d376: Status 404 returned error can't find the container with id cd5374c9a53a05df241c976687017e542ea9b241c8b393c2bcd28e0c9656d376 Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.910898 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 14:03:33 crc kubenswrapper[4914]: I0127 14:03:33.925646 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.175542 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-zr4cj"] Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.201024 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8555945b55-xnczx"] Jan 27 14:03:34 crc kubenswrapper[4914]: 
I0127 14:03:34.202317 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.211123 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.223038 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-xnczx"] Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.239427 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-config\") pod \"dnsmasq-dns-8555945b55-xnczx\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") " pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.239514 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-dns-svc\") pod \"dnsmasq-dns-8555945b55-xnczx\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") " pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.239603 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-ovsdbserver-sb\") pod \"dnsmasq-dns-8555945b55-xnczx\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") " pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.239652 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs9rl\" (UniqueName: \"kubernetes.io/projected/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-kube-api-access-zs9rl\") pod \"dnsmasq-dns-8555945b55-xnczx\" (UID: 
\"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") " pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.239854 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4r9r5"] Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.241037 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.244033 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.262005 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4r9r5"] Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.324329 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.335089 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.337411 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.337646 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.337791 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.337855 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.339168 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-hw5br" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.342025 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-ovsdbserver-sb\") pod \"dnsmasq-dns-8555945b55-xnczx\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") " pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.342146 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs9rl\" (UniqueName: \"kubernetes.io/projected/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-kube-api-access-zs9rl\") pod \"dnsmasq-dns-8555945b55-xnczx\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") " pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.342280 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-config\") pod \"dnsmasq-dns-8555945b55-xnczx\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") " 
pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.342405 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-dns-svc\") pod \"dnsmasq-dns-8555945b55-xnczx\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") " pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.343198 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-dns-svc\") pod \"dnsmasq-dns-8555945b55-xnczx\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") " pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.343601 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-ovsdbserver-sb\") pod \"dnsmasq-dns-8555945b55-xnczx\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") " pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.344061 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-config\") pod \"dnsmasq-dns-8555945b55-xnczx\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") " pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.366745 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs9rl\" (UniqueName: \"kubernetes.io/projected/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-kube-api-access-zs9rl\") pod \"dnsmasq-dns-8555945b55-xnczx\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") " pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.446089 4914 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/97163637-6474-4c5a-b153-113d64e8c07f-ovs-rundir\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.446140 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97163637-6474-4c5a-b153-113d64e8c07f-combined-ca-bundle\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.446176 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-cache\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.446197 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzp9h\" (UniqueName: \"kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-kube-api-access-dzp9h\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.446228 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97163637-6474-4c5a-b153-113d64e8c07f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.446264 4914 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.446309 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.446331 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/97163637-6474-4c5a-b153-113d64e8c07f-ovn-rundir\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.446408 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7x2q\" (UniqueName: \"kubernetes.io/projected/97163637-6474-4c5a-b153-113d64e8c07f-kube-api-access-f7x2q\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.446436 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97163637-6474-4c5a-b153-113d64e8c07f-config\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.446475 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-lock\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.446507 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.483460 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-xnczx"] Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.484390 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8555945b55-xnczx" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.504394 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-dxsh7"] Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.506271 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.514406 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.534066 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-dxsh7"] Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.547467 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7x2q\" (UniqueName: \"kubernetes.io/projected/97163637-6474-4c5a-b153-113d64e8c07f-kube-api-access-f7x2q\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.547516 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97163637-6474-4c5a-b153-113d64e8c07f-config\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.547550 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-lock\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.547575 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.547595 4914 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/97163637-6474-4c5a-b153-113d64e8c07f-ovs-rundir\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.547614 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97163637-6474-4c5a-b153-113d64e8c07f-combined-ca-bundle\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.547631 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-cache\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.547646 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzp9h\" (UniqueName: \"kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-kube-api-access-dzp9h\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.547668 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97163637-6474-4c5a-b153-113d64e8c07f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.547699 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.547741 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.547763 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/97163637-6474-4c5a-b153-113d64e8c07f-ovn-rundir\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.548211 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/97163637-6474-4c5a-b153-113d64e8c07f-ovn-rundir\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: E0127 14:03:34.548688 4914 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:03:34 crc kubenswrapper[4914]: E0127 14:03:34.548730 4914 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:03:34 crc kubenswrapper[4914]: E0127 14:03:34.548786 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift podName:62cc5d9e-afad-4888-9e8f-c57f7b185d2b nodeName:}" failed. 
No retries permitted until 2026-01-27 14:03:35.048768242 +0000 UTC m=+1173.361118317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift") pod "swift-storage-0" (UID: "62cc5d9e-afad-4888-9e8f-c57f7b185d2b") : configmap "swift-ring-files" not found Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.548892 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-lock\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.548924 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/97163637-6474-4c5a-b153-113d64e8c07f-ovs-rundir\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.549193 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-cache\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.549304 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97163637-6474-4c5a-b153-113d64e8c07f-config\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.550307 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.554596 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97163637-6474-4c5a-b153-113d64e8c07f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.554690 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.563845 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97163637-6474-4c5a-b153-113d64e8c07f-combined-ca-bundle\") pod \"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.567483 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzp9h\" (UniqueName: \"kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-kube-api-access-dzp9h\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.567587 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7x2q\" (UniqueName: \"kubernetes.io/projected/97163637-6474-4c5a-b153-113d64e8c07f-kube-api-access-f7x2q\") pod 
\"ovn-controller-metrics-4r9r5\" (UID: \"97163637-6474-4c5a-b153-113d64e8c07f\") " pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.578624 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.649064 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-config\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.649194 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.649255 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.649309 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f96ml\" (UniqueName: \"kubernetes.io/projected/66a27657-35f7-4e4e-a754-cb7baffffa74-kube-api-access-f96ml\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: 
\"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.649334 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.751183 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.751279 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.751333 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f96ml\" (UniqueName: \"kubernetes.io/projected/66a27657-35f7-4e4e-a754-cb7baffffa74-kube-api-access-f96ml\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.751360 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: 
\"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.751791 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-config\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.752243 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.752431 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.752672 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-config\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.753192 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: 
I0127 14:03:34.776425 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f96ml\" (UniqueName: \"kubernetes.io/projected/66a27657-35f7-4e4e-a754-cb7baffffa74-kube-api-access-f96ml\") pod \"dnsmasq-dns-6cb545bd4c-dxsh7\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") " pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.849118 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.862401 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4r9r5" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.864584 4914 generic.go:334] "Generic (PLEG): container finished" podID="c2de2f0f-8048-4eee-8baf-991ad19a4ffa" containerID="3408f1d36f84ede6d11d0cb40832f52975ce9baad273d82b63dc01704b072cb7" exitCode=0 Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.865510 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" event={"ID":"c2de2f0f-8048-4eee-8baf-991ad19a4ffa","Type":"ContainerDied","Data":"3408f1d36f84ede6d11d0cb40832f52975ce9baad273d82b63dc01704b072cb7"} Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.865542 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" event={"ID":"c2de2f0f-8048-4eee-8baf-991ad19a4ffa","Type":"ContainerStarted","Data":"cd5374c9a53a05df241c976687017e542ea9b241c8b393c2bcd28e0c9656d376"} Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.866004 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.926337 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4f8dg"] Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.927536 4914 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.934785 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.936422 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.936963 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.952101 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.954964 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4f8dg"] Jan 27 14:03:34 crc kubenswrapper[4914]: I0127 14:03:34.981698 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-xnczx"] Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.058256 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.059046 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-dispersionconf\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.059093 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d441d11-3241-45da-8bcf-c95636d3efa9-etc-swift\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.059133 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d441d11-3241-45da-8bcf-c95636d3efa9-ring-data-devices\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.059173 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d441d11-3241-45da-8bcf-c95636d3efa9-scripts\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.059206 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-combined-ca-bundle\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.059250 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbzql\" (UniqueName: \"kubernetes.io/projected/0d441d11-3241-45da-8bcf-c95636d3efa9-kube-api-access-gbzql\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: E0127 14:03:35.058986 4914 
projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:03:35 crc kubenswrapper[4914]: E0127 14:03:35.059335 4914 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:03:35 crc kubenswrapper[4914]: E0127 14:03:35.059502 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift podName:62cc5d9e-afad-4888-9e8f-c57f7b185d2b nodeName:}" failed. No retries permitted until 2026-01-27 14:03:36.059486388 +0000 UTC m=+1174.371836473 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift") pod "swift-storage-0" (UID: "62cc5d9e-afad-4888-9e8f-c57f7b185d2b") : configmap "swift-ring-files" not found Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.061554 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-swiftconf\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.183281 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-swiftconf\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.183577 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-dispersionconf\") pod \"swift-ring-rebalance-4f8dg\" 
(UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.183625 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d441d11-3241-45da-8bcf-c95636d3efa9-etc-swift\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.183728 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d441d11-3241-45da-8bcf-c95636d3efa9-ring-data-devices\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.183817 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d441d11-3241-45da-8bcf-c95636d3efa9-scripts\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.183903 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-combined-ca-bundle\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.183994 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbzql\" (UniqueName: \"kubernetes.io/projected/0d441d11-3241-45da-8bcf-c95636d3efa9-kube-api-access-gbzql\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " 
pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.185590 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d441d11-3241-45da-8bcf-c95636d3efa9-ring-data-devices\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.185903 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d441d11-3241-45da-8bcf-c95636d3efa9-etc-swift\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.186240 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d441d11-3241-45da-8bcf-c95636d3efa9-scripts\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.198263 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-dispersionconf\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.200429 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-swiftconf\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.212539 4914 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-combined-ca-bundle\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.212618 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.214018 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.217181 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbzql\" (UniqueName: \"kubernetes.io/projected/0d441d11-3241-45da-8bcf-c95636d3efa9-kube-api-access-gbzql\") pod \"swift-ring-rebalance-4f8dg\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") " pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.219674 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.219904 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.220038 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.220124 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jqhr7" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.238699 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.388936 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/3824e689-7118-49e0-b61e-da16b54872ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.389609 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3824e689-7118-49e0-b61e-da16b54872ca-config\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.389644 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3824e689-7118-49e0-b61e-da16b54872ca-scripts\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.389679 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3824e689-7118-49e0-b61e-da16b54872ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.390014 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhllt\" (UniqueName: \"kubernetes.io/projected/3824e689-7118-49e0-b61e-da16b54872ca-kube-api-access-qhllt\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.390448 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3824e689-7118-49e0-b61e-da16b54872ca-combined-ca-bundle\") pod \"ovn-northd-0\" 
(UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.390567 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3824e689-7118-49e0-b61e-da16b54872ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.434694 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-dxsh7"] Jan 27 14:03:35 crc kubenswrapper[4914]: W0127 14:03:35.441032 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66a27657_35f7_4e4e_a754_cb7baffffa74.slice/crio-1c1a6a674c0cf82a8838f94c6e723d223407ffdf20303a211f9dd71dc9b05197 WatchSource:0}: Error finding container 1c1a6a674c0cf82a8838f94c6e723d223407ffdf20303a211f9dd71dc9b05197: Status 404 returned error can't find the container with id 1c1a6a674c0cf82a8838f94c6e723d223407ffdf20303a211f9dd71dc9b05197 Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.445309 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4f8dg" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.460950 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4r9r5"] Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.492042 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3824e689-7118-49e0-b61e-da16b54872ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.492090 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3824e689-7118-49e0-b61e-da16b54872ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.492114 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3824e689-7118-49e0-b61e-da16b54872ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.492167 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3824e689-7118-49e0-b61e-da16b54872ca-config\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.492192 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3824e689-7118-49e0-b61e-da16b54872ca-scripts\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 
14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.492211 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3824e689-7118-49e0-b61e-da16b54872ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.492248 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhllt\" (UniqueName: \"kubernetes.io/projected/3824e689-7118-49e0-b61e-da16b54872ca-kube-api-access-qhllt\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.493033 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3824e689-7118-49e0-b61e-da16b54872ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.493457 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3824e689-7118-49e0-b61e-da16b54872ca-config\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.493497 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3824e689-7118-49e0-b61e-da16b54872ca-scripts\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.496842 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3824e689-7118-49e0-b61e-da16b54872ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.497367 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3824e689-7118-49e0-b61e-da16b54872ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.497579 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3824e689-7118-49e0-b61e-da16b54872ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.512285 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhllt\" (UniqueName: \"kubernetes.io/projected/3824e689-7118-49e0-b61e-da16b54872ca-kube-api-access-qhllt\") pod \"ovn-northd-0\" (UID: \"3824e689-7118-49e0-b61e-da16b54872ca\") " pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.589749 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.875772 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4r9r5" event={"ID":"97163637-6474-4c5a-b153-113d64e8c07f","Type":"ContainerStarted","Data":"98b72279fc04e57707ba17ec243637e16f18400ef9033ec7b3d9f8ca2b500649"} Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.875852 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4r9r5" event={"ID":"97163637-6474-4c5a-b153-113d64e8c07f","Type":"ContainerStarted","Data":"ec25688e893c3ca330ae1500f23568b2db252bd8acdab7c4c78df52b208de648"} Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.877717 4914 generic.go:334] "Generic (PLEG): container finished" podID="d2d90dd2-f40d-4f5d-89a0-6008bebe15b0" containerID="f4ffbd4c8ea014188f8d636461b3d9959cb6b48701a9dc2f140f4538c8cf3bb6" exitCode=0 Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.877809 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8555945b55-xnczx" event={"ID":"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0","Type":"ContainerDied","Data":"f4ffbd4c8ea014188f8d636461b3d9959cb6b48701a9dc2f140f4538c8cf3bb6"} Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.877861 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8555945b55-xnczx" event={"ID":"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0","Type":"ContainerStarted","Data":"8a87e801460d824d6f25f325e4f95e0d059ec5943e3ad10bd57a81d703f35091"} Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.880536 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" event={"ID":"c2de2f0f-8048-4eee-8baf-991ad19a4ffa","Type":"ContainerStarted","Data":"e1df10c6a73e9d241f4764d766b49e265bd32825b1dca9adf79bd02211b46774"} Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.880630 4914 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.880638 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" podUID="c2de2f0f-8048-4eee-8baf-991ad19a4ffa" containerName="dnsmasq-dns" containerID="cri-o://e1df10c6a73e9d241f4764d766b49e265bd32825b1dca9adf79bd02211b46774" gracePeriod=10 Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.885352 4914 generic.go:334] "Generic (PLEG): container finished" podID="66a27657-35f7-4e4e-a754-cb7baffffa74" containerID="caf4749ab42d2a3e53332b78e653e5bab8ed25a3d187aa061cf8b84c537f55cc" exitCode=0 Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.885447 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" event={"ID":"66a27657-35f7-4e4e-a754-cb7baffffa74","Type":"ContainerDied","Data":"caf4749ab42d2a3e53332b78e653e5bab8ed25a3d187aa061cf8b84c537f55cc"} Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.885492 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" event={"ID":"66a27657-35f7-4e4e-a754-cb7baffffa74","Type":"ContainerStarted","Data":"1c1a6a674c0cf82a8838f94c6e723d223407ffdf20303a211f9dd71dc9b05197"} Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.891812 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4r9r5" podStartSLOduration=1.891792981 podStartE2EDuration="1.891792981s" podCreationTimestamp="2026-01-27 14:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:03:35.889771387 +0000 UTC m=+1174.202121482" watchObservedRunningTime="2026-01-27 14:03:35.891792981 +0000 UTC m=+1174.204143066" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.926241 4914 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" podStartSLOduration=2.926194914 podStartE2EDuration="2.926194914s" podCreationTimestamp="2026-01-27 14:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:03:35.918328108 +0000 UTC m=+1174.230678193" watchObservedRunningTime="2026-01-27 14:03:35.926194914 +0000 UTC m=+1174.238544999" Jan 27 14:03:35 crc kubenswrapper[4914]: I0127 14:03:35.954314 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4f8dg"] Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.073025 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 14:03:36 crc kubenswrapper[4914]: E0127 14:03:36.107715 4914 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:03:36 crc kubenswrapper[4914]: E0127 14:03:36.107767 4914 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:03:36 crc kubenswrapper[4914]: E0127 14:03:36.107853 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift podName:62cc5d9e-afad-4888-9e8f-c57f7b185d2b nodeName:}" failed. No retries permitted until 2026-01-27 14:03:38.107801277 +0000 UTC m=+1176.420151362 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift") pod "swift-storage-0" (UID: "62cc5d9e-afad-4888-9e8f-c57f7b185d2b") : configmap "swift-ring-files" not found
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.107566 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0"
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.363154 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8555945b55-xnczx"
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.397570 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj"
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.513492 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-dns-svc\") pod \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") "
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.513980 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-dns-svc\") pod \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\" (UID: \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\") "
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.514019 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-config\") pod \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") "
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.514094 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-ovsdbserver-sb\") pod \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") "
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.514246 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs9rl\" (UniqueName: \"kubernetes.io/projected/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-kube-api-access-zs9rl\") pod \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\" (UID: \"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0\") "
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.514273 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-config\") pod \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\" (UID: \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\") "
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.514317 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qpnb\" (UniqueName: \"kubernetes.io/projected/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-kube-api-access-2qpnb\") pod \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\" (UID: \"c2de2f0f-8048-4eee-8baf-991ad19a4ffa\") "
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.526289 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-kube-api-access-2qpnb" (OuterVolumeSpecName: "kube-api-access-2qpnb") pod "c2de2f0f-8048-4eee-8baf-991ad19a4ffa" (UID: "c2de2f0f-8048-4eee-8baf-991ad19a4ffa"). InnerVolumeSpecName "kube-api-access-2qpnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.526439 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-kube-api-access-zs9rl" (OuterVolumeSpecName: "kube-api-access-zs9rl") pod "d2d90dd2-f40d-4f5d-89a0-6008bebe15b0" (UID: "d2d90dd2-f40d-4f5d-89a0-6008bebe15b0"). InnerVolumeSpecName "kube-api-access-zs9rl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.536412 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2d90dd2-f40d-4f5d-89a0-6008bebe15b0" (UID: "d2d90dd2-f40d-4f5d-89a0-6008bebe15b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.536421 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-config" (OuterVolumeSpecName: "config") pod "d2d90dd2-f40d-4f5d-89a0-6008bebe15b0" (UID: "d2d90dd2-f40d-4f5d-89a0-6008bebe15b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.549588 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2d90dd2-f40d-4f5d-89a0-6008bebe15b0" (UID: "d2d90dd2-f40d-4f5d-89a0-6008bebe15b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.556351 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2de2f0f-8048-4eee-8baf-991ad19a4ffa" (UID: "c2de2f0f-8048-4eee-8baf-991ad19a4ffa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.564603 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-config" (OuterVolumeSpecName: "config") pod "c2de2f0f-8048-4eee-8baf-991ad19a4ffa" (UID: "c2de2f0f-8048-4eee-8baf-991ad19a4ffa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.616042 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qpnb\" (UniqueName: \"kubernetes.io/projected/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-kube-api-access-2qpnb\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.616077 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.616086 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.616096 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.616107 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.616117 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs9rl\" (UniqueName: \"kubernetes.io/projected/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0-kube-api-access-zs9rl\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.616126 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2de2f0f-8048-4eee-8baf-991ad19a4ffa-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.901650 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8555945b55-xnczx"
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.901643 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8555945b55-xnczx" event={"ID":"d2d90dd2-f40d-4f5d-89a0-6008bebe15b0","Type":"ContainerDied","Data":"8a87e801460d824d6f25f325e4f95e0d059ec5943e3ad10bd57a81d703f35091"}
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.902985 4914 scope.go:117] "RemoveContainer" containerID="f4ffbd4c8ea014188f8d636461b3d9959cb6b48701a9dc2f140f4538c8cf3bb6"
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.907946 4914 generic.go:334] "Generic (PLEG): container finished" podID="c2de2f0f-8048-4eee-8baf-991ad19a4ffa" containerID="e1df10c6a73e9d241f4764d766b49e265bd32825b1dca9adf79bd02211b46774" exitCode=0
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.908096 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj"
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.908118 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" event={"ID":"c2de2f0f-8048-4eee-8baf-991ad19a4ffa","Type":"ContainerDied","Data":"e1df10c6a73e9d241f4764d766b49e265bd32825b1dca9adf79bd02211b46774"}
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.908162 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-zr4cj" event={"ID":"c2de2f0f-8048-4eee-8baf-991ad19a4ffa","Type":"ContainerDied","Data":"cd5374c9a53a05df241c976687017e542ea9b241c8b393c2bcd28e0c9656d376"}
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.909397 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3824e689-7118-49e0-b61e-da16b54872ca","Type":"ContainerStarted","Data":"8a434ee64fd68c4c42c33192a51d901f1decfd572544c8ef1eccede7b4a3fc0c"}
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.912268 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" event={"ID":"66a27657-35f7-4e4e-a754-cb7baffffa74","Type":"ContainerStarted","Data":"df2de16efa930b432c6618f4a734d0fbbf7dd15875b8d3ffe2a95a8c250e03c7"}
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.913259 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7"
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.915103 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4f8dg" event={"ID":"0d441d11-3241-45da-8bcf-c95636d3efa9","Type":"ContainerStarted","Data":"15435b9ffad8aa133e4fccf86678c7dbba45ea69be86758edf2ab07e0e601df4"}
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.932135 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" podStartSLOduration=2.9321169620000003 podStartE2EDuration="2.932116962s" podCreationTimestamp="2026-01-27 14:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:03:36.92951243 +0000 UTC m=+1175.241862525" watchObservedRunningTime="2026-01-27 14:03:36.932116962 +0000 UTC m=+1175.244467047"
Jan 27 14:03:36 crc kubenswrapper[4914]: I0127 14:03:36.975637 4914 scope.go:117] "RemoveContainer" containerID="e1df10c6a73e9d241f4764d766b49e265bd32825b1dca9adf79bd02211b46774"
Jan 27 14:03:37 crc kubenswrapper[4914]: I0127 14:03:37.015113 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-zr4cj"]
Jan 27 14:03:37 crc kubenswrapper[4914]: I0127 14:03:37.058522 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-zr4cj"]
Jan 27 14:03:37 crc kubenswrapper[4914]: I0127 14:03:37.072722 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-xnczx"]
Jan 27 14:03:37 crc kubenswrapper[4914]: I0127 14:03:37.079804 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-xnczx"]
Jan 27 14:03:37 crc kubenswrapper[4914]: I0127 14:03:37.276655 4914 scope.go:117] "RemoveContainer" containerID="3408f1d36f84ede6d11d0cb40832f52975ce9baad273d82b63dc01704b072cb7"
Jan 27 14:03:37 crc kubenswrapper[4914]: I0127 14:03:37.323351 4914 scope.go:117] "RemoveContainer" containerID="e1df10c6a73e9d241f4764d766b49e265bd32825b1dca9adf79bd02211b46774"
Jan 27 14:03:37 crc kubenswrapper[4914]: E0127 14:03:37.323821 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1df10c6a73e9d241f4764d766b49e265bd32825b1dca9adf79bd02211b46774\": container with ID starting with e1df10c6a73e9d241f4764d766b49e265bd32825b1dca9adf79bd02211b46774 not found: ID does not exist" containerID="e1df10c6a73e9d241f4764d766b49e265bd32825b1dca9adf79bd02211b46774"
Jan 27 14:03:37 crc kubenswrapper[4914]: I0127 14:03:37.323871 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1df10c6a73e9d241f4764d766b49e265bd32825b1dca9adf79bd02211b46774"} err="failed to get container status \"e1df10c6a73e9d241f4764d766b49e265bd32825b1dca9adf79bd02211b46774\": rpc error: code = NotFound desc = could not find container \"e1df10c6a73e9d241f4764d766b49e265bd32825b1dca9adf79bd02211b46774\": container with ID starting with e1df10c6a73e9d241f4764d766b49e265bd32825b1dca9adf79bd02211b46774 not found: ID does not exist"
Jan 27 14:03:37 crc kubenswrapper[4914]: I0127 14:03:37.323894 4914 scope.go:117] "RemoveContainer" containerID="3408f1d36f84ede6d11d0cb40832f52975ce9baad273d82b63dc01704b072cb7"
Jan 27 14:03:37 crc kubenswrapper[4914]: E0127 14:03:37.324142 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3408f1d36f84ede6d11d0cb40832f52975ce9baad273d82b63dc01704b072cb7\": container with ID starting with 3408f1d36f84ede6d11d0cb40832f52975ce9baad273d82b63dc01704b072cb7 not found: ID does not exist" containerID="3408f1d36f84ede6d11d0cb40832f52975ce9baad273d82b63dc01704b072cb7"
Jan 27 14:03:37 crc kubenswrapper[4914]: I0127 14:03:37.324159 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3408f1d36f84ede6d11d0cb40832f52975ce9baad273d82b63dc01704b072cb7"} err="failed to get container status \"3408f1d36f84ede6d11d0cb40832f52975ce9baad273d82b63dc01704b072cb7\": rpc error: code = NotFound desc = could not find container \"3408f1d36f84ede6d11d0cb40832f52975ce9baad273d82b63dc01704b072cb7\": container with ID starting with 3408f1d36f84ede6d11d0cb40832f52975ce9baad273d82b63dc01704b072cb7 not found: ID does not exist"
Jan 27 14:03:38 crc kubenswrapper[4914]: I0127 14:03:38.153745 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0"
Jan 27 14:03:38 crc kubenswrapper[4914]: E0127 14:03:38.153969 4914 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 27 14:03:38 crc kubenswrapper[4914]: E0127 14:03:38.154161 4914 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 27 14:03:38 crc kubenswrapper[4914]: E0127 14:03:38.154225 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift podName:62cc5d9e-afad-4888-9e8f-c57f7b185d2b nodeName:}" failed. No retries permitted until 2026-01-27 14:03:42.154203169 +0000 UTC m=+1180.466553254 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift") pod "swift-storage-0" (UID: "62cc5d9e-afad-4888-9e8f-c57f7b185d2b") : configmap "swift-ring-files" not found
Jan 27 14:03:38 crc kubenswrapper[4914]: I0127 14:03:38.302935 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2de2f0f-8048-4eee-8baf-991ad19a4ffa" path="/var/lib/kubelet/pods/c2de2f0f-8048-4eee-8baf-991ad19a4ffa/volumes"
Jan 27 14:03:38 crc kubenswrapper[4914]: I0127 14:03:38.303676 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d90dd2-f40d-4f5d-89a0-6008bebe15b0" path="/var/lib/kubelet/pods/d2d90dd2-f40d-4f5d-89a0-6008bebe15b0/volumes"
Jan 27 14:03:39 crc kubenswrapper[4914]: I0127 14:03:39.160472 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 27 14:03:39 crc kubenswrapper[4914]: I0127 14:03:39.160791 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 27 14:03:39 crc kubenswrapper[4914]: I0127 14:03:39.232601 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 27 14:03:39 crc kubenswrapper[4914]: I0127 14:03:39.940369 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4f8dg" event={"ID":"0d441d11-3241-45da-8bcf-c95636d3efa9","Type":"ContainerStarted","Data":"5c4dfa95545a84a0fd222a9275bc69b3f002f05ea1b8b4822b79a9ed7b5a6555"}
Jan 27 14:03:39 crc kubenswrapper[4914]: I0127 14:03:39.943012 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3824e689-7118-49e0-b61e-da16b54872ca","Type":"ContainerStarted","Data":"4fab5cf2cf18c350a0896acaee3af720eb737053fb12b4c19b000a18a0374ae2"}
Jan 27 14:03:39 crc kubenswrapper[4914]: I0127 14:03:39.943066 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3824e689-7118-49e0-b61e-da16b54872ca","Type":"ContainerStarted","Data":"7033fe73117b2778ce1757cd700a29addab49ddcab246d91b644396ede35a51a"}
Jan 27 14:03:39 crc kubenswrapper[4914]: I0127 14:03:39.943319 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 27 14:03:39 crc kubenswrapper[4914]: I0127 14:03:39.955615 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-4f8dg" podStartSLOduration=2.444436396 podStartE2EDuration="5.955597142s" podCreationTimestamp="2026-01-27 14:03:34 +0000 UTC" firstStartedPulling="2026-01-27 14:03:35.977596001 +0000 UTC m=+1174.289946086" lastFinishedPulling="2026-01-27 14:03:39.488756747 +0000 UTC m=+1177.801106832" observedRunningTime="2026-01-27 14:03:39.9555204 +0000 UTC m=+1178.267870495" watchObservedRunningTime="2026-01-27 14:03:39.955597142 +0000 UTC m=+1178.267947227"
Jan 27 14:03:39 crc kubenswrapper[4914]: I0127 14:03:39.980191 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.750794258 podStartE2EDuration="4.980170155s" podCreationTimestamp="2026-01-27 14:03:35 +0000 UTC" firstStartedPulling="2026-01-27 14:03:36.096295492 +0000 UTC m=+1174.408645577" lastFinishedPulling="2026-01-27 14:03:37.325671389 +0000 UTC m=+1175.638021474" observedRunningTime="2026-01-27 14:03:39.972891235 +0000 UTC m=+1178.285241340" watchObservedRunningTime="2026-01-27 14:03:39.980170155 +0000 UTC m=+1178.292520240"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.005466 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.368873 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f1c7-account-create-update-2j2jk"]
Jan 27 14:03:40 crc kubenswrapper[4914]: E0127 14:03:40.369493 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2de2f0f-8048-4eee-8baf-991ad19a4ffa" containerName="init"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.369507 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2de2f0f-8048-4eee-8baf-991ad19a4ffa" containerName="init"
Jan 27 14:03:40 crc kubenswrapper[4914]: E0127 14:03:40.369538 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d90dd2-f40d-4f5d-89a0-6008bebe15b0" containerName="init"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.369543 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d90dd2-f40d-4f5d-89a0-6008bebe15b0" containerName="init"
Jan 27 14:03:40 crc kubenswrapper[4914]: E0127 14:03:40.369558 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2de2f0f-8048-4eee-8baf-991ad19a4ffa" containerName="dnsmasq-dns"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.369564 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2de2f0f-8048-4eee-8baf-991ad19a4ffa" containerName="dnsmasq-dns"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.369719 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d90dd2-f40d-4f5d-89a0-6008bebe15b0" containerName="init"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.369731 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2de2f0f-8048-4eee-8baf-991ad19a4ffa" containerName="dnsmasq-dns"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.370301 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f1c7-account-create-update-2j2jk"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.373200 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.383717 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f1c7-account-create-update-2j2jk"]
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.398534 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.398595 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.448736 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nsvsv"]
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.450206 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nsvsv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.458508 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nsvsv"]
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.495666 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.501731 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e88dce20-0af2-4799-8351-7ba637ee84c0-operator-scripts\") pod \"keystone-f1c7-account-create-update-2j2jk\" (UID: \"e88dce20-0af2-4799-8351-7ba637ee84c0\") " pod="openstack/keystone-f1c7-account-create-update-2j2jk"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.502606 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm5xq\" (UniqueName: \"kubernetes.io/projected/e88dce20-0af2-4799-8351-7ba637ee84c0-kube-api-access-cm5xq\") pod \"keystone-f1c7-account-create-update-2j2jk\" (UID: \"e88dce20-0af2-4799-8351-7ba637ee84c0\") " pod="openstack/keystone-f1c7-account-create-update-2j2jk"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.603898 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm5xq\" (UniqueName: \"kubernetes.io/projected/e88dce20-0af2-4799-8351-7ba637ee84c0-kube-api-access-cm5xq\") pod \"keystone-f1c7-account-create-update-2j2jk\" (UID: \"e88dce20-0af2-4799-8351-7ba637ee84c0\") " pod="openstack/keystone-f1c7-account-create-update-2j2jk"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.604212 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmwv4\" (UniqueName: \"kubernetes.io/projected/7f2122e6-777c-4be7-86ae-a16bd4255827-kube-api-access-wmwv4\") pod \"keystone-db-create-nsvsv\" (UID: \"7f2122e6-777c-4be7-86ae-a16bd4255827\") " pod="openstack/keystone-db-create-nsvsv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.604331 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f2122e6-777c-4be7-86ae-a16bd4255827-operator-scripts\") pod \"keystone-db-create-nsvsv\" (UID: \"7f2122e6-777c-4be7-86ae-a16bd4255827\") " pod="openstack/keystone-db-create-nsvsv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.604388 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e88dce20-0af2-4799-8351-7ba637ee84c0-operator-scripts\") pod \"keystone-f1c7-account-create-update-2j2jk\" (UID: \"e88dce20-0af2-4799-8351-7ba637ee84c0\") " pod="openstack/keystone-f1c7-account-create-update-2j2jk"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.605158 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e88dce20-0af2-4799-8351-7ba637ee84c0-operator-scripts\") pod \"keystone-f1c7-account-create-update-2j2jk\" (UID: \"e88dce20-0af2-4799-8351-7ba637ee84c0\") " pod="openstack/keystone-f1c7-account-create-update-2j2jk"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.619251 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-cl47n"]
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.620443 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cl47n"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.628326 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm5xq\" (UniqueName: \"kubernetes.io/projected/e88dce20-0af2-4799-8351-7ba637ee84c0-kube-api-access-cm5xq\") pod \"keystone-f1c7-account-create-update-2j2jk\" (UID: \"e88dce20-0af2-4799-8351-7ba637ee84c0\") " pod="openstack/keystone-f1c7-account-create-update-2j2jk"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.629076 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cl47n"]
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.699574 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f1c7-account-create-update-2j2jk"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.705755 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f2122e6-777c-4be7-86ae-a16bd4255827-operator-scripts\") pod \"keystone-db-create-nsvsv\" (UID: \"7f2122e6-777c-4be7-86ae-a16bd4255827\") " pod="openstack/keystone-db-create-nsvsv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.705872 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0f14b57-bd10-4e82-ac68-aac2faa80f49-operator-scripts\") pod \"placement-db-create-cl47n\" (UID: \"c0f14b57-bd10-4e82-ac68-aac2faa80f49\") " pod="openstack/placement-db-create-cl47n"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.705900 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qlnx\" (UniqueName: \"kubernetes.io/projected/c0f14b57-bd10-4e82-ac68-aac2faa80f49-kube-api-access-4qlnx\") pod \"placement-db-create-cl47n\" (UID: \"c0f14b57-bd10-4e82-ac68-aac2faa80f49\") " pod="openstack/placement-db-create-cl47n"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.705947 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmwv4\" (UniqueName: \"kubernetes.io/projected/7f2122e6-777c-4be7-86ae-a16bd4255827-kube-api-access-wmwv4\") pod \"keystone-db-create-nsvsv\" (UID: \"7f2122e6-777c-4be7-86ae-a16bd4255827\") " pod="openstack/keystone-db-create-nsvsv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.706854 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f2122e6-777c-4be7-86ae-a16bd4255827-operator-scripts\") pod \"keystone-db-create-nsvsv\" (UID: \"7f2122e6-777c-4be7-86ae-a16bd4255827\") " pod="openstack/keystone-db-create-nsvsv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.723162 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7459-account-create-update-wjgrv"]
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.724295 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7459-account-create-update-wjgrv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.727193 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.739053 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7459-account-create-update-wjgrv"]
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.755727 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmwv4\" (UniqueName: \"kubernetes.io/projected/7f2122e6-777c-4be7-86ae-a16bd4255827-kube-api-access-wmwv4\") pod \"keystone-db-create-nsvsv\" (UID: \"7f2122e6-777c-4be7-86ae-a16bd4255827\") " pod="openstack/keystone-db-create-nsvsv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.805432 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nsvsv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.806761 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0f14b57-bd10-4e82-ac68-aac2faa80f49-operator-scripts\") pod \"placement-db-create-cl47n\" (UID: \"c0f14b57-bd10-4e82-ac68-aac2faa80f49\") " pod="openstack/placement-db-create-cl47n"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.806807 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qlnx\" (UniqueName: \"kubernetes.io/projected/c0f14b57-bd10-4e82-ac68-aac2faa80f49-kube-api-access-4qlnx\") pod \"placement-db-create-cl47n\" (UID: \"c0f14b57-bd10-4e82-ac68-aac2faa80f49\") " pod="openstack/placement-db-create-cl47n"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.806928 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77v4v\" (UniqueName: \"kubernetes.io/projected/e3e09913-1b2c-459a-a120-f0d61eedec2a-kube-api-access-77v4v\") pod \"placement-7459-account-create-update-wjgrv\" (UID: \"e3e09913-1b2c-459a-a120-f0d61eedec2a\") " pod="openstack/placement-7459-account-create-update-wjgrv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.806965 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3e09913-1b2c-459a-a120-f0d61eedec2a-operator-scripts\") pod \"placement-7459-account-create-update-wjgrv\" (UID: \"e3e09913-1b2c-459a-a120-f0d61eedec2a\") " pod="openstack/placement-7459-account-create-update-wjgrv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.808415 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0f14b57-bd10-4e82-ac68-aac2faa80f49-operator-scripts\") pod \"placement-db-create-cl47n\" (UID: \"c0f14b57-bd10-4e82-ac68-aac2faa80f49\") " pod="openstack/placement-db-create-cl47n"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.828398 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qlnx\" (UniqueName: \"kubernetes.io/projected/c0f14b57-bd10-4e82-ac68-aac2faa80f49-kube-api-access-4qlnx\") pod \"placement-db-create-cl47n\" (UID: \"c0f14b57-bd10-4e82-ac68-aac2faa80f49\") " pod="openstack/placement-db-create-cl47n"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.908416 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77v4v\" (UniqueName: \"kubernetes.io/projected/e3e09913-1b2c-459a-a120-f0d61eedec2a-kube-api-access-77v4v\") pod \"placement-7459-account-create-update-wjgrv\" (UID: \"e3e09913-1b2c-459a-a120-f0d61eedec2a\") " pod="openstack/placement-7459-account-create-update-wjgrv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.909755 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3e09913-1b2c-459a-a120-f0d61eedec2a-operator-scripts\") pod \"placement-7459-account-create-update-wjgrv\" (UID: \"e3e09913-1b2c-459a-a120-f0d61eedec2a\") " pod="openstack/placement-7459-account-create-update-wjgrv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.917847 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3e09913-1b2c-459a-a120-f0d61eedec2a-operator-scripts\") pod \"placement-7459-account-create-update-wjgrv\" (UID: \"e3e09913-1b2c-459a-a120-f0d61eedec2a\") " pod="openstack/placement-7459-account-create-update-wjgrv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.934862 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77v4v\" (UniqueName: \"kubernetes.io/projected/e3e09913-1b2c-459a-a120-f0d61eedec2a-kube-api-access-77v4v\") pod \"placement-7459-account-create-update-wjgrv\" (UID: \"e3e09913-1b2c-459a-a120-f0d61eedec2a\") " pod="openstack/placement-7459-account-create-update-wjgrv"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.939545 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-hs9fd"]
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.940615 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hs9fd"
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.964570 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hs9fd"]
Jan 27 14:03:40 crc kubenswrapper[4914]: I0127 14:03:40.966846 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cl47n"
Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.023367 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6359c6f6-67f2-4b56-8ff9-57f336621b20-operator-scripts\") pod \"glance-db-create-hs9fd\" (UID: \"6359c6f6-67f2-4b56-8ff9-57f336621b20\") " pod="openstack/glance-db-create-hs9fd"
Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.023573 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9b7v\" (UniqueName: \"kubernetes.io/projected/6359c6f6-67f2-4b56-8ff9-57f336621b20-kube-api-access-t9b7v\") pod \"glance-db-create-hs9fd\" (UID: \"6359c6f6-67f2-4b56-8ff9-57f336621b20\") " pod="openstack/glance-db-create-hs9fd"
Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.031704 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c405-account-create-update-mr8x8"]
Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.033079 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c405-account-create-update-mr8x8"
Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.038764 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.045407 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c405-account-create-update-mr8x8"]
Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.063492 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.127363 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7459-account-create-update-wjgrv"
Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.130436 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6359c6f6-67f2-4b56-8ff9-57f336621b20-operator-scripts\") pod \"glance-db-create-hs9fd\" (UID: \"6359c6f6-67f2-4b56-8ff9-57f336621b20\") " pod="openstack/glance-db-create-hs9fd"
Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.130538 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9b7v\" (UniqueName: \"kubernetes.io/projected/6359c6f6-67f2-4b56-8ff9-57f336621b20-kube-api-access-t9b7v\") pod \"glance-db-create-hs9fd\" (UID: \"6359c6f6-67f2-4b56-8ff9-57f336621b20\") " pod="openstack/glance-db-create-hs9fd"
Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.130599 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6q8f\" (UniqueName: \"kubernetes.io/projected/27c36820-0762-4659-943c-2748a1bc3ca7-kube-api-access-n6q8f\") pod \"glance-c405-account-create-update-mr8x8\" (UID: \"27c36820-0762-4659-943c-2748a1bc3ca7\") " pod="openstack/glance-c405-account-create-update-mr8x8"
Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.130726 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c36820-0762-4659-943c-2748a1bc3ca7-operator-scripts\") pod \"glance-c405-account-create-update-mr8x8\" (UID: \"27c36820-0762-4659-943c-2748a1bc3ca7\") " pod="openstack/glance-c405-account-create-update-mr8x8"
Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.132371 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6359c6f6-67f2-4b56-8ff9-57f336621b20-operator-scripts\") pod
\"glance-db-create-hs9fd\" (UID: \"6359c6f6-67f2-4b56-8ff9-57f336621b20\") " pod="openstack/glance-db-create-hs9fd" Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.144249 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f1c7-account-create-update-2j2jk"] Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.154212 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9b7v\" (UniqueName: \"kubernetes.io/projected/6359c6f6-67f2-4b56-8ff9-57f336621b20-kube-api-access-t9b7v\") pod \"glance-db-create-hs9fd\" (UID: \"6359c6f6-67f2-4b56-8ff9-57f336621b20\") " pod="openstack/glance-db-create-hs9fd" Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.234056 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6q8f\" (UniqueName: \"kubernetes.io/projected/27c36820-0762-4659-943c-2748a1bc3ca7-kube-api-access-n6q8f\") pod \"glance-c405-account-create-update-mr8x8\" (UID: \"27c36820-0762-4659-943c-2748a1bc3ca7\") " pod="openstack/glance-c405-account-create-update-mr8x8" Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.234458 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c36820-0762-4659-943c-2748a1bc3ca7-operator-scripts\") pod \"glance-c405-account-create-update-mr8x8\" (UID: \"27c36820-0762-4659-943c-2748a1bc3ca7\") " pod="openstack/glance-c405-account-create-update-mr8x8" Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.237020 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c36820-0762-4659-943c-2748a1bc3ca7-operator-scripts\") pod \"glance-c405-account-create-update-mr8x8\" (UID: \"27c36820-0762-4659-943c-2748a1bc3ca7\") " pod="openstack/glance-c405-account-create-update-mr8x8" Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.249746 4914 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6q8f\" (UniqueName: \"kubernetes.io/projected/27c36820-0762-4659-943c-2748a1bc3ca7-kube-api-access-n6q8f\") pod \"glance-c405-account-create-update-mr8x8\" (UID: \"27c36820-0762-4659-943c-2748a1bc3ca7\") " pod="openstack/glance-c405-account-create-update-mr8x8" Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.277941 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hs9fd" Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.306884 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nsvsv"] Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.355806 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c405-account-create-update-mr8x8" Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.533779 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cl47n"] Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.595627 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7459-account-create-update-wjgrv"] Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.812586 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hs9fd"] Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.971236 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7459-account-create-update-wjgrv" event={"ID":"e3e09913-1b2c-459a-a120-f0d61eedec2a","Type":"ContainerStarted","Data":"ec22959605f759cb5f8b36ae8e82552412c93d9127bf377f7aa5421c030bf008"} Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.974070 4914 generic.go:334] "Generic (PLEG): container finished" podID="e88dce20-0af2-4799-8351-7ba637ee84c0" containerID="7825002cc51f2d69f0e68704b2045db7a0056d81de6eaea697547110ced248a5" exitCode=0 Jan 27 14:03:41 crc 
kubenswrapper[4914]: I0127 14:03:41.974175 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f1c7-account-create-update-2j2jk" event={"ID":"e88dce20-0af2-4799-8351-7ba637ee84c0","Type":"ContainerDied","Data":"7825002cc51f2d69f0e68704b2045db7a0056d81de6eaea697547110ced248a5"} Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.974427 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f1c7-account-create-update-2j2jk" event={"ID":"e88dce20-0af2-4799-8351-7ba637ee84c0","Type":"ContainerStarted","Data":"7dcb8042f73cf6f96c57a437384c82e245803347a790754779971d11b2ad8d96"} Jan 27 14:03:41 crc kubenswrapper[4914]: W0127 14:03:41.982443 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27c36820_0762_4659_943c_2748a1bc3ca7.slice/crio-59ccd276093283797f3b33e1d83a6a2f5b2db65e5ab7c94fd753954a5fb1bcd3 WatchSource:0}: Error finding container 59ccd276093283797f3b33e1d83a6a2f5b2db65e5ab7c94fd753954a5fb1bcd3: Status 404 returned error can't find the container with id 59ccd276093283797f3b33e1d83a6a2f5b2db65e5ab7c94fd753954a5fb1bcd3 Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.989137 4914 generic.go:334] "Generic (PLEG): container finished" podID="7f2122e6-777c-4be7-86ae-a16bd4255827" containerID="15782267b14023839eaed6430aa044ff92a42e347c8d0020606c8069b4cd578e" exitCode=0 Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.989277 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nsvsv" event={"ID":"7f2122e6-777c-4be7-86ae-a16bd4255827","Type":"ContainerDied","Data":"15782267b14023839eaed6430aa044ff92a42e347c8d0020606c8069b4cd578e"} Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.989302 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nsvsv" 
event={"ID":"7f2122e6-777c-4be7-86ae-a16bd4255827","Type":"ContainerStarted","Data":"cd859c5a8bf3a9dcfad94f2db84458e31e8a467f2eb4edd6022999f6258379f0"} Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.993884 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c405-account-create-update-mr8x8"] Jan 27 14:03:41 crc kubenswrapper[4914]: I0127 14:03:41.997490 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cl47n" event={"ID":"c0f14b57-bd10-4e82-ac68-aac2faa80f49","Type":"ContainerStarted","Data":"ff0d67f369badc4b03561390c4de65f439683813be7c7d5cc8bca3ec20a2ffcb"} Jan 27 14:03:42 crc kubenswrapper[4914]: I0127 14:03:42.000881 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hs9fd" event={"ID":"6359c6f6-67f2-4b56-8ff9-57f336621b20","Type":"ContainerStarted","Data":"138cf79aed239f335962b4e61e94b42e7bc9fc381d66315c6d424a8764e5c261"} Jan 27 14:03:42 crc kubenswrapper[4914]: I0127 14:03:42.024036 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-cl47n" podStartSLOduration=2.024016897 podStartE2EDuration="2.024016897s" podCreationTimestamp="2026-01-27 14:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:03:42.019620177 +0000 UTC m=+1180.331970262" watchObservedRunningTime="2026-01-27 14:03:42.024016897 +0000 UTC m=+1180.336367002" Jan 27 14:03:42 crc kubenswrapper[4914]: I0127 14:03:42.255909 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0" Jan 27 14:03:42 crc kubenswrapper[4914]: E0127 14:03:42.256152 4914 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:03:42 crc kubenswrapper[4914]: E0127 14:03:42.256171 4914 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:03:42 crc kubenswrapper[4914]: E0127 14:03:42.256214 4914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift podName:62cc5d9e-afad-4888-9e8f-c57f7b185d2b nodeName:}" failed. No retries permitted until 2026-01-27 14:03:50.256199475 +0000 UTC m=+1188.568549560 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift") pod "swift-storage-0" (UID: "62cc5d9e-afad-4888-9e8f-c57f7b185d2b") : configmap "swift-ring-files" not found Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.008334 4914 generic.go:334] "Generic (PLEG): container finished" podID="6359c6f6-67f2-4b56-8ff9-57f336621b20" containerID="9682a771aa890361b063287af87e90505b32d5885983464b76c4471e0e8ddae1" exitCode=0 Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.008368 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hs9fd" event={"ID":"6359c6f6-67f2-4b56-8ff9-57f336621b20","Type":"ContainerDied","Data":"9682a771aa890361b063287af87e90505b32d5885983464b76c4471e0e8ddae1"} Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.011141 4914 generic.go:334] "Generic (PLEG): container finished" podID="e3e09913-1b2c-459a-a120-f0d61eedec2a" containerID="858f489db88bdd88ea71a3ca3bcf8831b5db12046f431de1171778c788340ac1" exitCode=0 Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.011266 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7459-account-create-update-wjgrv" 
event={"ID":"e3e09913-1b2c-459a-a120-f0d61eedec2a","Type":"ContainerDied","Data":"858f489db88bdd88ea71a3ca3bcf8831b5db12046f431de1171778c788340ac1"} Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.018500 4914 generic.go:334] "Generic (PLEG): container finished" podID="27c36820-0762-4659-943c-2748a1bc3ca7" containerID="e90da8565533a6bd73778722e664ba725c1eaf314a38012b0b941a6c1dd9d1fa" exitCode=0 Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.018569 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c405-account-create-update-mr8x8" event={"ID":"27c36820-0762-4659-943c-2748a1bc3ca7","Type":"ContainerDied","Data":"e90da8565533a6bd73778722e664ba725c1eaf314a38012b0b941a6c1dd9d1fa"} Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.018597 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c405-account-create-update-mr8x8" event={"ID":"27c36820-0762-4659-943c-2748a1bc3ca7","Type":"ContainerStarted","Data":"59ccd276093283797f3b33e1d83a6a2f5b2db65e5ab7c94fd753954a5fb1bcd3"} Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.021258 4914 generic.go:334] "Generic (PLEG): container finished" podID="c0f14b57-bd10-4e82-ac68-aac2faa80f49" containerID="84267457ba98b000110e443198b980446fcd0e488ec44c5e1ced3720f4eb58ad" exitCode=0 Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.021348 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cl47n" event={"ID":"c0f14b57-bd10-4e82-ac68-aac2faa80f49","Type":"ContainerDied","Data":"84267457ba98b000110e443198b980446fcd0e488ec44c5e1ced3720f4eb58ad"} Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.056568 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.455954 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f1c7-account-create-update-2j2jk" Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.465741 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nsvsv" Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.580790 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmwv4\" (UniqueName: \"kubernetes.io/projected/7f2122e6-777c-4be7-86ae-a16bd4255827-kube-api-access-wmwv4\") pod \"7f2122e6-777c-4be7-86ae-a16bd4255827\" (UID: \"7f2122e6-777c-4be7-86ae-a16bd4255827\") " Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.580911 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e88dce20-0af2-4799-8351-7ba637ee84c0-operator-scripts\") pod \"e88dce20-0af2-4799-8351-7ba637ee84c0\" (UID: \"e88dce20-0af2-4799-8351-7ba637ee84c0\") " Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.580975 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm5xq\" (UniqueName: \"kubernetes.io/projected/e88dce20-0af2-4799-8351-7ba637ee84c0-kube-api-access-cm5xq\") pod \"e88dce20-0af2-4799-8351-7ba637ee84c0\" (UID: \"e88dce20-0af2-4799-8351-7ba637ee84c0\") " Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.581030 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f2122e6-777c-4be7-86ae-a16bd4255827-operator-scripts\") pod \"7f2122e6-777c-4be7-86ae-a16bd4255827\" (UID: \"7f2122e6-777c-4be7-86ae-a16bd4255827\") " Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.581819 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e88dce20-0af2-4799-8351-7ba637ee84c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"e88dce20-0af2-4799-8351-7ba637ee84c0" (UID: "e88dce20-0af2-4799-8351-7ba637ee84c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.581819 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2122e6-777c-4be7-86ae-a16bd4255827-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f2122e6-777c-4be7-86ae-a16bd4255827" (UID: "7f2122e6-777c-4be7-86ae-a16bd4255827"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.602027 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e88dce20-0af2-4799-8351-7ba637ee84c0-kube-api-access-cm5xq" (OuterVolumeSpecName: "kube-api-access-cm5xq") pod "e88dce20-0af2-4799-8351-7ba637ee84c0" (UID: "e88dce20-0af2-4799-8351-7ba637ee84c0"). InnerVolumeSpecName "kube-api-access-cm5xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.602082 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2122e6-777c-4be7-86ae-a16bd4255827-kube-api-access-wmwv4" (OuterVolumeSpecName: "kube-api-access-wmwv4") pod "7f2122e6-777c-4be7-86ae-a16bd4255827" (UID: "7f2122e6-777c-4be7-86ae-a16bd4255827"). InnerVolumeSpecName "kube-api-access-wmwv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.682632 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmwv4\" (UniqueName: \"kubernetes.io/projected/7f2122e6-777c-4be7-86ae-a16bd4255827-kube-api-access-wmwv4\") on node \"crc\" DevicePath \"\"" Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.682659 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e88dce20-0af2-4799-8351-7ba637ee84c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.682669 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm5xq\" (UniqueName: \"kubernetes.io/projected/e88dce20-0af2-4799-8351-7ba637ee84c0-kube-api-access-cm5xq\") on node \"crc\" DevicePath \"\"" Jan 27 14:03:43 crc kubenswrapper[4914]: I0127 14:03:43.682677 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f2122e6-777c-4be7-86ae-a16bd4255827-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.030729 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nsvsv" event={"ID":"7f2122e6-777c-4be7-86ae-a16bd4255827","Type":"ContainerDied","Data":"cd859c5a8bf3a9dcfad94f2db84458e31e8a467f2eb4edd6022999f6258379f0"} Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.030760 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nsvsv" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.030984 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd859c5a8bf3a9dcfad94f2db84458e31e8a467f2eb4edd6022999f6258379f0" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.040563 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f1c7-account-create-update-2j2jk" event={"ID":"e88dce20-0af2-4799-8351-7ba637ee84c0","Type":"ContainerDied","Data":"7dcb8042f73cf6f96c57a437384c82e245803347a790754779971d11b2ad8d96"} Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.040621 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dcb8042f73cf6f96c57a437384c82e245803347a790754779971d11b2ad8d96" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.040575 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f1c7-account-create-update-2j2jk" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.343590 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7459-account-create-update-wjgrv" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.396345 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77v4v\" (UniqueName: \"kubernetes.io/projected/e3e09913-1b2c-459a-a120-f0d61eedec2a-kube-api-access-77v4v\") pod \"e3e09913-1b2c-459a-a120-f0d61eedec2a\" (UID: \"e3e09913-1b2c-459a-a120-f0d61eedec2a\") " Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.396404 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3e09913-1b2c-459a-a120-f0d61eedec2a-operator-scripts\") pod \"e3e09913-1b2c-459a-a120-f0d61eedec2a\" (UID: \"e3e09913-1b2c-459a-a120-f0d61eedec2a\") " Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.398295 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3e09913-1b2c-459a-a120-f0d61eedec2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3e09913-1b2c-459a-a120-f0d61eedec2a" (UID: "e3e09913-1b2c-459a-a120-f0d61eedec2a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.460818 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3e09913-1b2c-459a-a120-f0d61eedec2a-kube-api-access-77v4v" (OuterVolumeSpecName: "kube-api-access-77v4v") pod "e3e09913-1b2c-459a-a120-f0d61eedec2a" (UID: "e3e09913-1b2c-459a-a120-f0d61eedec2a"). InnerVolumeSpecName "kube-api-access-77v4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.481823 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-cl47n" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.502258 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77v4v\" (UniqueName: \"kubernetes.io/projected/e3e09913-1b2c-459a-a120-f0d61eedec2a-kube-api-access-77v4v\") on node \"crc\" DevicePath \"\"" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.502298 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3e09913-1b2c-459a-a120-f0d61eedec2a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.519985 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c405-account-create-update-mr8x8" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.547935 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hs9fd" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.603453 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6359c6f6-67f2-4b56-8ff9-57f336621b20-operator-scripts\") pod \"6359c6f6-67f2-4b56-8ff9-57f336621b20\" (UID: \"6359c6f6-67f2-4b56-8ff9-57f336621b20\") " Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.603738 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6q8f\" (UniqueName: \"kubernetes.io/projected/27c36820-0762-4659-943c-2748a1bc3ca7-kube-api-access-n6q8f\") pod \"27c36820-0762-4659-943c-2748a1bc3ca7\" (UID: \"27c36820-0762-4659-943c-2748a1bc3ca7\") " Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.603987 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9b7v\" (UniqueName: 
\"kubernetes.io/projected/6359c6f6-67f2-4b56-8ff9-57f336621b20-kube-api-access-t9b7v\") pod \"6359c6f6-67f2-4b56-8ff9-57f336621b20\" (UID: \"6359c6f6-67f2-4b56-8ff9-57f336621b20\") " Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.604112 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c36820-0762-4659-943c-2748a1bc3ca7-operator-scripts\") pod \"27c36820-0762-4659-943c-2748a1bc3ca7\" (UID: \"27c36820-0762-4659-943c-2748a1bc3ca7\") " Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.604224 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0f14b57-bd10-4e82-ac68-aac2faa80f49-operator-scripts\") pod \"c0f14b57-bd10-4e82-ac68-aac2faa80f49\" (UID: \"c0f14b57-bd10-4e82-ac68-aac2faa80f49\") " Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.604295 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6359c6f6-67f2-4b56-8ff9-57f336621b20-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6359c6f6-67f2-4b56-8ff9-57f336621b20" (UID: "6359c6f6-67f2-4b56-8ff9-57f336621b20"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.604386 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qlnx\" (UniqueName: \"kubernetes.io/projected/c0f14b57-bd10-4e82-ac68-aac2faa80f49-kube-api-access-4qlnx\") pod \"c0f14b57-bd10-4e82-ac68-aac2faa80f49\" (UID: \"c0f14b57-bd10-4e82-ac68-aac2faa80f49\") " Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.604710 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c36820-0762-4659-943c-2748a1bc3ca7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27c36820-0762-4659-943c-2748a1bc3ca7" (UID: "27c36820-0762-4659-943c-2748a1bc3ca7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.604985 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c36820-0762-4659-943c-2748a1bc3ca7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.605072 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6359c6f6-67f2-4b56-8ff9-57f336621b20-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.605040 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f14b57-bd10-4e82-ac68-aac2faa80f49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0f14b57-bd10-4e82-ac68-aac2faa80f49" (UID: "c0f14b57-bd10-4e82-ac68-aac2faa80f49"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.607586 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c36820-0762-4659-943c-2748a1bc3ca7-kube-api-access-n6q8f" (OuterVolumeSpecName: "kube-api-access-n6q8f") pod "27c36820-0762-4659-943c-2748a1bc3ca7" (UID: "27c36820-0762-4659-943c-2748a1bc3ca7"). InnerVolumeSpecName "kube-api-access-n6q8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.607585 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6359c6f6-67f2-4b56-8ff9-57f336621b20-kube-api-access-t9b7v" (OuterVolumeSpecName: "kube-api-access-t9b7v") pod "6359c6f6-67f2-4b56-8ff9-57f336621b20" (UID: "6359c6f6-67f2-4b56-8ff9-57f336621b20"). InnerVolumeSpecName "kube-api-access-t9b7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.608769 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0f14b57-bd10-4e82-ac68-aac2faa80f49-kube-api-access-4qlnx" (OuterVolumeSpecName: "kube-api-access-4qlnx") pod "c0f14b57-bd10-4e82-ac68-aac2faa80f49" (UID: "c0f14b57-bd10-4e82-ac68-aac2faa80f49"). InnerVolumeSpecName "kube-api-access-4qlnx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.706694 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6q8f\" (UniqueName: \"kubernetes.io/projected/27c36820-0762-4659-943c-2748a1bc3ca7-kube-api-access-n6q8f\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.706725 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9b7v\" (UniqueName: \"kubernetes.io/projected/6359c6f6-67f2-4b56-8ff9-57f336621b20-kube-api-access-t9b7v\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.706735 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0f14b57-bd10-4e82-ac68-aac2faa80f49-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.706745 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qlnx\" (UniqueName: \"kubernetes.io/projected/c0f14b57-bd10-4e82-ac68-aac2faa80f49-kube-api-access-4qlnx\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.851780 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7"
Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.898975 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-sjkwk"]
Jan 27 14:03:44 crc kubenswrapper[4914]: I0127 14:03:44.899379 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" podUID="46aa6aa7-cacb-4442-9ff2-04962172adae" containerName="dnsmasq-dns" containerID="cri-o://d7a8524e33e35c61125cf1b15c3d19ff911a119c14583a02a3deb5c95d1d7dc3" gracePeriod=10
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.051278 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c405-account-create-update-mr8x8" event={"ID":"27c36820-0762-4659-943c-2748a1bc3ca7","Type":"ContainerDied","Data":"59ccd276093283797f3b33e1d83a6a2f5b2db65e5ab7c94fd753954a5fb1bcd3"}
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.051565 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59ccd276093283797f3b33e1d83a6a2f5b2db65e5ab7c94fd753954a5fb1bcd3"
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.051530 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c405-account-create-update-mr8x8"
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.052823 4914 generic.go:334] "Generic (PLEG): container finished" podID="46aa6aa7-cacb-4442-9ff2-04962172adae" containerID="d7a8524e33e35c61125cf1b15c3d19ff911a119c14583a02a3deb5c95d1d7dc3" exitCode=0
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.052899 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" event={"ID":"46aa6aa7-cacb-4442-9ff2-04962172adae","Type":"ContainerDied","Data":"d7a8524e33e35c61125cf1b15c3d19ff911a119c14583a02a3deb5c95d1d7dc3"}
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.057634 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cl47n"
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.057659 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cl47n" event={"ID":"c0f14b57-bd10-4e82-ac68-aac2faa80f49","Type":"ContainerDied","Data":"ff0d67f369badc4b03561390c4de65f439683813be7c7d5cc8bca3ec20a2ffcb"}
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.057715 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff0d67f369badc4b03561390c4de65f439683813be7c7d5cc8bca3ec20a2ffcb"
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.060887 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hs9fd"
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.061166 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hs9fd" event={"ID":"6359c6f6-67f2-4b56-8ff9-57f336621b20","Type":"ContainerDied","Data":"138cf79aed239f335962b4e61e94b42e7bc9fc381d66315c6d424a8764e5c261"}
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.061214 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="138cf79aed239f335962b4e61e94b42e7bc9fc381d66315c6d424a8764e5c261"
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.066492 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7459-account-create-update-wjgrv"
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.066453 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7459-account-create-update-wjgrv" event={"ID":"e3e09913-1b2c-459a-a120-f0d61eedec2a","Type":"ContainerDied","Data":"ec22959605f759cb5f8b36ae8e82552412c93d9127bf377f7aa5421c030bf008"}
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.066797 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec22959605f759cb5f8b36ae8e82552412c93d9127bf377f7aa5421c030bf008"
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.353028 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-sjkwk"
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.423094 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k92fn\" (UniqueName: \"kubernetes.io/projected/46aa6aa7-cacb-4442-9ff2-04962172adae-kube-api-access-k92fn\") pod \"46aa6aa7-cacb-4442-9ff2-04962172adae\" (UID: \"46aa6aa7-cacb-4442-9ff2-04962172adae\") "
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.423153 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46aa6aa7-cacb-4442-9ff2-04962172adae-config\") pod \"46aa6aa7-cacb-4442-9ff2-04962172adae\" (UID: \"46aa6aa7-cacb-4442-9ff2-04962172adae\") "
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.423303 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46aa6aa7-cacb-4442-9ff2-04962172adae-dns-svc\") pod \"46aa6aa7-cacb-4442-9ff2-04962172adae\" (UID: \"46aa6aa7-cacb-4442-9ff2-04962172adae\") "
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.426535 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46aa6aa7-cacb-4442-9ff2-04962172adae-kube-api-access-k92fn" (OuterVolumeSpecName: "kube-api-access-k92fn") pod "46aa6aa7-cacb-4442-9ff2-04962172adae" (UID: "46aa6aa7-cacb-4442-9ff2-04962172adae"). InnerVolumeSpecName "kube-api-access-k92fn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.461469 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46aa6aa7-cacb-4442-9ff2-04962172adae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46aa6aa7-cacb-4442-9ff2-04962172adae" (UID: "46aa6aa7-cacb-4442-9ff2-04962172adae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.474622 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46aa6aa7-cacb-4442-9ff2-04962172adae-config" (OuterVolumeSpecName: "config") pod "46aa6aa7-cacb-4442-9ff2-04962172adae" (UID: "46aa6aa7-cacb-4442-9ff2-04962172adae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.525309 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k92fn\" (UniqueName: \"kubernetes.io/projected/46aa6aa7-cacb-4442-9ff2-04962172adae-kube-api-access-k92fn\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.525340 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46aa6aa7-cacb-4442-9ff2-04962172adae-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:45 crc kubenswrapper[4914]: I0127 14:03:45.525349 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46aa6aa7-cacb-4442-9ff2-04962172adae-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.075407 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-sjkwk"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.075393 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-sjkwk" event={"ID":"46aa6aa7-cacb-4442-9ff2-04962172adae","Type":"ContainerDied","Data":"92ad538134a7da6632d237af57570c4f5a599e3c2f5bd5b951b1058bde8b6e3a"}
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.075766 4914 scope.go:117] "RemoveContainer" containerID="d7a8524e33e35c61125cf1b15c3d19ff911a119c14583a02a3deb5c95d1d7dc3"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.096289 4914 scope.go:117] "RemoveContainer" containerID="04956f3ea9ad99e0f0760fade36dcb30dfa5a8bd75fab400c61cf927f8549ca6"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.113053 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-sjkwk"]
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.121225 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-sjkwk"]
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.256329 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-89vdt"]
Jan 27 14:03:46 crc kubenswrapper[4914]: E0127 14:03:46.257132 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88dce20-0af2-4799-8351-7ba637ee84c0" containerName="mariadb-account-create-update"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257174 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88dce20-0af2-4799-8351-7ba637ee84c0" containerName="mariadb-account-create-update"
Jan 27 14:03:46 crc kubenswrapper[4914]: E0127 14:03:46.257193 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6359c6f6-67f2-4b56-8ff9-57f336621b20" containerName="mariadb-database-create"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257201 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="6359c6f6-67f2-4b56-8ff9-57f336621b20" containerName="mariadb-database-create"
Jan 27 14:03:46 crc kubenswrapper[4914]: E0127 14:03:46.257217 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2122e6-777c-4be7-86ae-a16bd4255827" containerName="mariadb-database-create"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257224 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2122e6-777c-4be7-86ae-a16bd4255827" containerName="mariadb-database-create"
Jan 27 14:03:46 crc kubenswrapper[4914]: E0127 14:03:46.257235 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46aa6aa7-cacb-4442-9ff2-04962172adae" containerName="init"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257242 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="46aa6aa7-cacb-4442-9ff2-04962172adae" containerName="init"
Jan 27 14:03:46 crc kubenswrapper[4914]: E0127 14:03:46.257267 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e09913-1b2c-459a-a120-f0d61eedec2a" containerName="mariadb-account-create-update"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257275 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e09913-1b2c-459a-a120-f0d61eedec2a" containerName="mariadb-account-create-update"
Jan 27 14:03:46 crc kubenswrapper[4914]: E0127 14:03:46.257316 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0f14b57-bd10-4e82-ac68-aac2faa80f49" containerName="mariadb-database-create"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257329 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0f14b57-bd10-4e82-ac68-aac2faa80f49" containerName="mariadb-database-create"
Jan 27 14:03:46 crc kubenswrapper[4914]: E0127 14:03:46.257338 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c36820-0762-4659-943c-2748a1bc3ca7" containerName="mariadb-account-create-update"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257344 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c36820-0762-4659-943c-2748a1bc3ca7" containerName="mariadb-account-create-update"
Jan 27 14:03:46 crc kubenswrapper[4914]: E0127 14:03:46.257353 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46aa6aa7-cacb-4442-9ff2-04962172adae" containerName="dnsmasq-dns"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257358 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="46aa6aa7-cacb-4442-9ff2-04962172adae" containerName="dnsmasq-dns"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257677 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="46aa6aa7-cacb-4442-9ff2-04962172adae" containerName="dnsmasq-dns"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257693 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0f14b57-bd10-4e82-ac68-aac2faa80f49" containerName="mariadb-database-create"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257702 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c36820-0762-4659-943c-2748a1bc3ca7" containerName="mariadb-account-create-update"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257716 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="e88dce20-0af2-4799-8351-7ba637ee84c0" containerName="mariadb-account-create-update"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257727 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e09913-1b2c-459a-a120-f0d61eedec2a" containerName="mariadb-account-create-update"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257746 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2122e6-777c-4be7-86ae-a16bd4255827" containerName="mariadb-database-create"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.257757 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="6359c6f6-67f2-4b56-8ff9-57f336621b20" containerName="mariadb-database-create"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.258664 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.260897 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.262364 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h2zsk"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.270063 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-89vdt"]
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.307304 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46aa6aa7-cacb-4442-9ff2-04962172adae" path="/var/lib/kubelet/pods/46aa6aa7-cacb-4442-9ff2-04962172adae/volumes"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.350213 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx74v\" (UniqueName: \"kubernetes.io/projected/54c4cebf-28fc-49cf-93e1-c10215cd7c85-kube-api-access-gx74v\") pod \"glance-db-sync-89vdt\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") " pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.350295 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-combined-ca-bundle\") pod \"glance-db-sync-89vdt\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") " pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.350350 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-db-sync-config-data\") pod \"glance-db-sync-89vdt\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") " pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.350385 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-config-data\") pod \"glance-db-sync-89vdt\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") " pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.451423 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx74v\" (UniqueName: \"kubernetes.io/projected/54c4cebf-28fc-49cf-93e1-c10215cd7c85-kube-api-access-gx74v\") pod \"glance-db-sync-89vdt\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") " pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.451742 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-combined-ca-bundle\") pod \"glance-db-sync-89vdt\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") " pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.451876 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-db-sync-config-data\") pod \"glance-db-sync-89vdt\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") " pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.451970 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-config-data\") pod \"glance-db-sync-89vdt\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") " pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.457141 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-db-sync-config-data\") pod \"glance-db-sync-89vdt\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") " pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.459518 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-combined-ca-bundle\") pod \"glance-db-sync-89vdt\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") " pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.459851 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-config-data\") pod \"glance-db-sync-89vdt\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") " pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.481860 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx74v\" (UniqueName: \"kubernetes.io/projected/54c4cebf-28fc-49cf-93e1-c10215cd7c85-kube-api-access-gx74v\") pod \"glance-db-sync-89vdt\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") " pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:46 crc kubenswrapper[4914]: I0127 14:03:46.575801 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-89vdt"
Jan 27 14:03:47 crc kubenswrapper[4914]: I0127 14:03:47.084217 4914 generic.go:334] "Generic (PLEG): container finished" podID="0d441d11-3241-45da-8bcf-c95636d3efa9" containerID="5c4dfa95545a84a0fd222a9275bc69b3f002f05ea1b8b4822b79a9ed7b5a6555" exitCode=0
Jan 27 14:03:47 crc kubenswrapper[4914]: I0127 14:03:47.084347 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4f8dg" event={"ID":"0d441d11-3241-45da-8bcf-c95636d3efa9","Type":"ContainerDied","Data":"5c4dfa95545a84a0fd222a9275bc69b3f002f05ea1b8b4822b79a9ed7b5a6555"}
Jan 27 14:03:47 crc kubenswrapper[4914]: I0127 14:03:47.163236 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-89vdt"]
Jan 27 14:03:47 crc kubenswrapper[4914]: I0127 14:03:47.785609 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-p9dmr"]
Jan 27 14:03:47 crc kubenswrapper[4914]: I0127 14:03:47.786594 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p9dmr"
Jan 27 14:03:47 crc kubenswrapper[4914]: I0127 14:03:47.788677 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 27 14:03:47 crc kubenswrapper[4914]: I0127 14:03:47.800925 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p9dmr"]
Jan 27 14:03:47 crc kubenswrapper[4914]: I0127 14:03:47.900587 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59fcd83f-be4c-4d89-9427-92025120e63f-operator-scripts\") pod \"root-account-create-update-p9dmr\" (UID: \"59fcd83f-be4c-4d89-9427-92025120e63f\") " pod="openstack/root-account-create-update-p9dmr"
Jan 27 14:03:47 crc kubenswrapper[4914]: I0127 14:03:47.900677 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcsjg\" (UniqueName: \"kubernetes.io/projected/59fcd83f-be4c-4d89-9427-92025120e63f-kube-api-access-zcsjg\") pod \"root-account-create-update-p9dmr\" (UID: \"59fcd83f-be4c-4d89-9427-92025120e63f\") " pod="openstack/root-account-create-update-p9dmr"
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.001766 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcsjg\" (UniqueName: \"kubernetes.io/projected/59fcd83f-be4c-4d89-9427-92025120e63f-kube-api-access-zcsjg\") pod \"root-account-create-update-p9dmr\" (UID: \"59fcd83f-be4c-4d89-9427-92025120e63f\") " pod="openstack/root-account-create-update-p9dmr"
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.001907 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59fcd83f-be4c-4d89-9427-92025120e63f-operator-scripts\") pod \"root-account-create-update-p9dmr\" (UID: \"59fcd83f-be4c-4d89-9427-92025120e63f\") " pod="openstack/root-account-create-update-p9dmr"
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.002741 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59fcd83f-be4c-4d89-9427-92025120e63f-operator-scripts\") pod \"root-account-create-update-p9dmr\" (UID: \"59fcd83f-be4c-4d89-9427-92025120e63f\") " pod="openstack/root-account-create-update-p9dmr"
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.021241 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcsjg\" (UniqueName: \"kubernetes.io/projected/59fcd83f-be4c-4d89-9427-92025120e63f-kube-api-access-zcsjg\") pod \"root-account-create-update-p9dmr\" (UID: \"59fcd83f-be4c-4d89-9427-92025120e63f\") " pod="openstack/root-account-create-update-p9dmr"
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.097987 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-89vdt" event={"ID":"54c4cebf-28fc-49cf-93e1-c10215cd7c85","Type":"ContainerStarted","Data":"2272db3c28e33b6c8036475c014cda67f01001bd191ecc2c053e31521f8356ca"}
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.113327 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p9dmr"
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.389752 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4f8dg"
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.514280 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-swiftconf\") pod \"0d441d11-3241-45da-8bcf-c95636d3efa9\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") "
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.514422 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d441d11-3241-45da-8bcf-c95636d3efa9-scripts\") pod \"0d441d11-3241-45da-8bcf-c95636d3efa9\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") "
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.515019 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d441d11-3241-45da-8bcf-c95636d3efa9-ring-data-devices\") pod \"0d441d11-3241-45da-8bcf-c95636d3efa9\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") "
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.515056 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-dispersionconf\") pod \"0d441d11-3241-45da-8bcf-c95636d3efa9\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") "
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.515083 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-combined-ca-bundle\") pod \"0d441d11-3241-45da-8bcf-c95636d3efa9\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") "
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.515101 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbzql\" (UniqueName: \"kubernetes.io/projected/0d441d11-3241-45da-8bcf-c95636d3efa9-kube-api-access-gbzql\") pod \"0d441d11-3241-45da-8bcf-c95636d3efa9\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") "
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.515127 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d441d11-3241-45da-8bcf-c95636d3efa9-etc-swift\") pod \"0d441d11-3241-45da-8bcf-c95636d3efa9\" (UID: \"0d441d11-3241-45da-8bcf-c95636d3efa9\") "
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.516141 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d441d11-3241-45da-8bcf-c95636d3efa9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0d441d11-3241-45da-8bcf-c95636d3efa9" (UID: "0d441d11-3241-45da-8bcf-c95636d3efa9"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.516181 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d441d11-3241-45da-8bcf-c95636d3efa9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0d441d11-3241-45da-8bcf-c95636d3efa9" (UID: "0d441d11-3241-45da-8bcf-c95636d3efa9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.519445 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d441d11-3241-45da-8bcf-c95636d3efa9-kube-api-access-gbzql" (OuterVolumeSpecName: "kube-api-access-gbzql") pod "0d441d11-3241-45da-8bcf-c95636d3efa9" (UID: "0d441d11-3241-45da-8bcf-c95636d3efa9"). InnerVolumeSpecName "kube-api-access-gbzql". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.521538 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0d441d11-3241-45da-8bcf-c95636d3efa9" (UID: "0d441d11-3241-45da-8bcf-c95636d3efa9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.533207 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d441d11-3241-45da-8bcf-c95636d3efa9-scripts" (OuterVolumeSpecName: "scripts") pod "0d441d11-3241-45da-8bcf-c95636d3efa9" (UID: "0d441d11-3241-45da-8bcf-c95636d3efa9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.535678 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0d441d11-3241-45da-8bcf-c95636d3efa9" (UID: "0d441d11-3241-45da-8bcf-c95636d3efa9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.537418 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d441d11-3241-45da-8bcf-c95636d3efa9" (UID: "0d441d11-3241-45da-8bcf-c95636d3efa9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.614143 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p9dmr"]
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.617343 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d441d11-3241-45da-8bcf-c95636d3efa9-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.617390 4914 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d441d11-3241-45da-8bcf-c95636d3efa9-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.617406 4914 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.617418 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.617431 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbzql\" (UniqueName: \"kubernetes.io/projected/0d441d11-3241-45da-8bcf-c95636d3efa9-kube-api-access-gbzql\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.617440 4914 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d441d11-3241-45da-8bcf-c95636d3efa9-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:48 crc kubenswrapper[4914]: I0127 14:03:48.617512 4914 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d441d11-3241-45da-8bcf-c95636d3efa9-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:48 crc kubenswrapper[4914]: W0127 14:03:48.620997 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59fcd83f_be4c_4d89_9427_92025120e63f.slice/crio-874d6c055f82382eef885b6676e95fe2f28982abc964366cc4091ed3b1d60f7c WatchSource:0}: Error finding container 874d6c055f82382eef885b6676e95fe2f28982abc964366cc4091ed3b1d60f7c: Status 404 returned error can't find the container with id 874d6c055f82382eef885b6676e95fe2f28982abc964366cc4091ed3b1d60f7c
Jan 27 14:03:49 crc kubenswrapper[4914]: I0127 14:03:49.106280 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4f8dg" event={"ID":"0d441d11-3241-45da-8bcf-c95636d3efa9","Type":"ContainerDied","Data":"15435b9ffad8aa133e4fccf86678c7dbba45ea69be86758edf2ab07e0e601df4"}
Jan 27 14:03:49 crc kubenswrapper[4914]: I0127 14:03:49.106655 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15435b9ffad8aa133e4fccf86678c7dbba45ea69be86758edf2ab07e0e601df4"
Jan 27 14:03:49 crc kubenswrapper[4914]: I0127 14:03:49.106355 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4f8dg"
Jan 27 14:03:49 crc kubenswrapper[4914]: I0127 14:03:49.108530 4914 generic.go:334] "Generic (PLEG): container finished" podID="59fcd83f-be4c-4d89-9427-92025120e63f" containerID="288e00b9021032445478efb7c4c5fdf89de3954d4aa42b5702d76ba9c7b60873" exitCode=0
Jan 27 14:03:49 crc kubenswrapper[4914]: I0127 14:03:49.108565 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p9dmr" event={"ID":"59fcd83f-be4c-4d89-9427-92025120e63f","Type":"ContainerDied","Data":"288e00b9021032445478efb7c4c5fdf89de3954d4aa42b5702d76ba9c7b60873"}
Jan 27 14:03:49 crc kubenswrapper[4914]: I0127 14:03:49.108589 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p9dmr" event={"ID":"59fcd83f-be4c-4d89-9427-92025120e63f","Type":"ContainerStarted","Data":"874d6c055f82382eef885b6676e95fe2f28982abc964366cc4091ed3b1d60f7c"}
Jan 27 14:03:50 crc kubenswrapper[4914]: I0127 14:03:50.346476 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0"
Jan 27 14:03:50 crc kubenswrapper[4914]: I0127 14:03:50.352341 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62cc5d9e-afad-4888-9e8f-c57f7b185d2b-etc-swift\") pod \"swift-storage-0\" (UID: \"62cc5d9e-afad-4888-9e8f-c57f7b185d2b\") " pod="openstack/swift-storage-0"
Jan 27 14:03:50 crc kubenswrapper[4914]: I0127 14:03:50.488280 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p9dmr"
Jan 27 14:03:50 crc kubenswrapper[4914]: I0127 14:03:50.548769 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59fcd83f-be4c-4d89-9427-92025120e63f-operator-scripts\") pod \"59fcd83f-be4c-4d89-9427-92025120e63f\" (UID: \"59fcd83f-be4c-4d89-9427-92025120e63f\") "
Jan 27 14:03:50 crc kubenswrapper[4914]: I0127 14:03:50.548906 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcsjg\" (UniqueName: \"kubernetes.io/projected/59fcd83f-be4c-4d89-9427-92025120e63f-kube-api-access-zcsjg\") pod \"59fcd83f-be4c-4d89-9427-92025120e63f\" (UID: \"59fcd83f-be4c-4d89-9427-92025120e63f\") "
Jan 27 14:03:50 crc kubenswrapper[4914]: I0127 14:03:50.549642 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59fcd83f-be4c-4d89-9427-92025120e63f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59fcd83f-be4c-4d89-9427-92025120e63f" (UID: "59fcd83f-be4c-4d89-9427-92025120e63f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:03:50 crc kubenswrapper[4914]: I0127 14:03:50.553661 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59fcd83f-be4c-4d89-9427-92025120e63f-kube-api-access-zcsjg" (OuterVolumeSpecName: "kube-api-access-zcsjg") pod "59fcd83f-be4c-4d89-9427-92025120e63f" (UID: "59fcd83f-be4c-4d89-9427-92025120e63f"). InnerVolumeSpecName "kube-api-access-zcsjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:03:50 crc kubenswrapper[4914]: I0127 14:03:50.555045 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 27 14:03:50 crc kubenswrapper[4914]: I0127 14:03:50.651127 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59fcd83f-be4c-4d89-9427-92025120e63f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:50 crc kubenswrapper[4914]: I0127 14:03:50.651441 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcsjg\" (UniqueName: \"kubernetes.io/projected/59fcd83f-be4c-4d89-9427-92025120e63f-kube-api-access-zcsjg\") on node \"crc\" DevicePath \"\""
Jan 27 14:03:50 crc kubenswrapper[4914]: I0127 14:03:50.657209 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 27 14:03:51 crc kubenswrapper[4914]: I0127 14:03:51.098655 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 27 14:03:51 crc kubenswrapper[4914]: W0127 14:03:51.108217 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62cc5d9e_afad_4888_9e8f_c57f7b185d2b.slice/crio-28bb3ff6bbc701e97f3aa6406aa1ace9f70882b466ff6968d78c2c9247117a73 WatchSource:0}: Error finding container 28bb3ff6bbc701e97f3aa6406aa1ace9f70882b466ff6968d78c2c9247117a73: Status 404 returned error can't find the container with id 28bb3ff6bbc701e97f3aa6406aa1ace9f70882b466ff6968d78c2c9247117a73
Jan 27 14:03:51 crc kubenswrapper[4914]: I0127 14:03:51.125483 4914 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/root-account-create-update-p9dmr" Jan 27 14:03:51 crc kubenswrapper[4914]: I0127 14:03:51.125480 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p9dmr" event={"ID":"59fcd83f-be4c-4d89-9427-92025120e63f","Type":"ContainerDied","Data":"874d6c055f82382eef885b6676e95fe2f28982abc964366cc4091ed3b1d60f7c"} Jan 27 14:03:51 crc kubenswrapper[4914]: I0127 14:03:51.125631 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="874d6c055f82382eef885b6676e95fe2f28982abc964366cc4091ed3b1d60f7c" Jan 27 14:03:51 crc kubenswrapper[4914]: I0127 14:03:51.127400 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"28bb3ff6bbc701e97f3aa6406aa1ace9f70882b466ff6968d78c2c9247117a73"} Jan 27 14:03:54 crc kubenswrapper[4914]: I0127 14:03:54.037654 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p9dmr"] Jan 27 14:03:54 crc kubenswrapper[4914]: I0127 14:03:54.049777 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-p9dmr"] Jan 27 14:03:54 crc kubenswrapper[4914]: I0127 14:03:54.149936 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"197e96ac0aa387c152746576274f332fcf2946341ad93bf6f717daa4a750496d"} Jan 27 14:03:54 crc kubenswrapper[4914]: I0127 14:03:54.149977 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"02cb5030833ed30578ac4fa5a354f890ee2dfe2aa1b87323ae16d935accce516"} Jan 27 14:03:54 crc kubenswrapper[4914]: I0127 14:03:54.149987 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"1bda38ffdcb35e7b3d1db832ede9a822d0b8f274207d9234b943f3d2aa897b8c"} Jan 27 14:03:54 crc kubenswrapper[4914]: I0127 14:03:54.303753 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59fcd83f-be4c-4d89-9427-92025120e63f" path="/var/lib/kubelet/pods/59fcd83f-be4c-4d89-9427-92025120e63f/volumes" Jan 27 14:03:56 crc kubenswrapper[4914]: I0127 14:03:56.167420 4914 generic.go:334] "Generic (PLEG): container finished" podID="9dc0242e-0a62-4f1c-b978-00f6b2651429" containerID="310f0d8ee69d8348287d42e688e3c34bcd22a0c014b71fa45e4908f7c3f9dc7b" exitCode=0 Jan 27 14:03:56 crc kubenswrapper[4914]: I0127 14:03:56.167531 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9dc0242e-0a62-4f1c-b978-00f6b2651429","Type":"ContainerDied","Data":"310f0d8ee69d8348287d42e688e3c34bcd22a0c014b71fa45e4908f7c3f9dc7b"} Jan 27 14:03:56 crc kubenswrapper[4914]: I0127 14:03:56.182704 4914 generic.go:334] "Generic (PLEG): container finished" podID="ead132f0-586e-402b-87bb-f7109396498d" containerID="60868d2a16744a7d8e12849ab0667e58062dc138a6494289bb33b1915cc1001f" exitCode=0 Jan 27 14:03:56 crc kubenswrapper[4914]: I0127 14:03:56.182749 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ead132f0-586e-402b-87bb-f7109396498d","Type":"ContainerDied","Data":"60868d2a16744a7d8e12849ab0667e58062dc138a6494289bb33b1915cc1001f"} Jan 27 14:03:56 crc kubenswrapper[4914]: I0127 14:03:56.271583 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fwgsp" podUID="16d7aef1-746e-4166-a82d-e40371ebc96c" containerName="ovn-controller" probeResult="failure" output=< Jan 27 14:03:56 crc kubenswrapper[4914]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 14:03:56 crc kubenswrapper[4914]: > Jan 27 14:03:56 crc 
kubenswrapper[4914]: I0127 14:03:56.291322 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.047845 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mpmhb"] Jan 27 14:03:59 crc kubenswrapper[4914]: E0127 14:03:59.049659 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d441d11-3241-45da-8bcf-c95636d3efa9" containerName="swift-ring-rebalance" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.049766 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d441d11-3241-45da-8bcf-c95636d3efa9" containerName="swift-ring-rebalance" Jan 27 14:03:59 crc kubenswrapper[4914]: E0127 14:03:59.049889 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59fcd83f-be4c-4d89-9427-92025120e63f" containerName="mariadb-account-create-update" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.049997 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="59fcd83f-be4c-4d89-9427-92025120e63f" containerName="mariadb-account-create-update" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.050283 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d441d11-3241-45da-8bcf-c95636d3efa9" containerName="swift-ring-rebalance" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.050405 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="59fcd83f-be4c-4d89-9427-92025120e63f" containerName="mariadb-account-create-update" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.051191 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mpmhb" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.053282 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.060913 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mpmhb"] Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.193094 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a2b976-d911-40b1-a016-7c8cf2df1a19-operator-scripts\") pod \"root-account-create-update-mpmhb\" (UID: \"90a2b976-d911-40b1-a016-7c8cf2df1a19\") " pod="openstack/root-account-create-update-mpmhb" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.193147 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5jl5\" (UniqueName: \"kubernetes.io/projected/90a2b976-d911-40b1-a016-7c8cf2df1a19-kube-api-access-r5jl5\") pod \"root-account-create-update-mpmhb\" (UID: \"90a2b976-d911-40b1-a016-7c8cf2df1a19\") " pod="openstack/root-account-create-update-mpmhb" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.294624 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5jl5\" (UniqueName: \"kubernetes.io/projected/90a2b976-d911-40b1-a016-7c8cf2df1a19-kube-api-access-r5jl5\") pod \"root-account-create-update-mpmhb\" (UID: \"90a2b976-d911-40b1-a016-7c8cf2df1a19\") " pod="openstack/root-account-create-update-mpmhb" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.295000 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a2b976-d911-40b1-a016-7c8cf2df1a19-operator-scripts\") pod \"root-account-create-update-mpmhb\" (UID: 
\"90a2b976-d911-40b1-a016-7c8cf2df1a19\") " pod="openstack/root-account-create-update-mpmhb" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.297060 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a2b976-d911-40b1-a016-7c8cf2df1a19-operator-scripts\") pod \"root-account-create-update-mpmhb\" (UID: \"90a2b976-d911-40b1-a016-7c8cf2df1a19\") " pod="openstack/root-account-create-update-mpmhb" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.315397 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5jl5\" (UniqueName: \"kubernetes.io/projected/90a2b976-d911-40b1-a016-7c8cf2df1a19-kube-api-access-r5jl5\") pod \"root-account-create-update-mpmhb\" (UID: \"90a2b976-d911-40b1-a016-7c8cf2df1a19\") " pod="openstack/root-account-create-update-mpmhb" Jan 27 14:03:59 crc kubenswrapper[4914]: I0127 14:03:59.368978 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mpmhb" Jan 27 14:04:01 crc kubenswrapper[4914]: I0127 14:04:01.260253 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fwgsp" podUID="16d7aef1-746e-4166-a82d-e40371ebc96c" containerName="ovn-controller" probeResult="failure" output=< Jan 27 14:04:01 crc kubenswrapper[4914]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 14:04:01 crc kubenswrapper[4914]: > Jan 27 14:04:01 crc kubenswrapper[4914]: I0127 14:04:01.783670 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mpmhb"] Jan 27 14:04:01 crc kubenswrapper[4914]: W0127 14:04:01.787911 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90a2b976_d911_40b1_a016_7c8cf2df1a19.slice/crio-da3dcfadfb537f82c06b341c10d64db96806b3109f16177a1b286dae325ae5b3 WatchSource:0}: Error finding container da3dcfadfb537f82c06b341c10d64db96806b3109f16177a1b286dae325ae5b3: Status 404 returned error can't find the container with id da3dcfadfb537f82c06b341c10d64db96806b3109f16177a1b286dae325ae5b3 Jan 27 14:04:02 crc kubenswrapper[4914]: I0127 14:04:02.234200 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9dc0242e-0a62-4f1c-b978-00f6b2651429","Type":"ContainerStarted","Data":"72a0e7ef894659f6e574eb6b278ddb50e7091ad93d45138a52a28b1bd5765080"} Jan 27 14:04:02 crc kubenswrapper[4914]: I0127 14:04:02.234615 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 14:04:02 crc kubenswrapper[4914]: I0127 14:04:02.235643 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"ead132f0-586e-402b-87bb-f7109396498d","Type":"ContainerStarted","Data":"bdbe6000f255755bfcd1719879a61fef64524a82bf179d78758b04a5d7b435b8"} Jan 27 14:04:02 crc kubenswrapper[4914]: I0127 14:04:02.236334 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:04:02 crc kubenswrapper[4914]: I0127 14:04:02.243419 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"9136d18aa6307f1845572253a48d31c1ddf0cac20a059614b5a5e2805a4d927b"} Jan 27 14:04:02 crc kubenswrapper[4914]: I0127 14:04:02.244925 4914 generic.go:334] "Generic (PLEG): container finished" podID="90a2b976-d911-40b1-a016-7c8cf2df1a19" containerID="25aff3a107df0be6e0642f5ee3e72920aadea0fcf62e8deb0f5c751a8e96e08b" exitCode=0 Jan 27 14:04:02 crc kubenswrapper[4914]: I0127 14:04:02.244984 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mpmhb" event={"ID":"90a2b976-d911-40b1-a016-7c8cf2df1a19","Type":"ContainerDied","Data":"25aff3a107df0be6e0642f5ee3e72920aadea0fcf62e8deb0f5c751a8e96e08b"} Jan 27 14:04:02 crc kubenswrapper[4914]: I0127 14:04:02.245011 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mpmhb" event={"ID":"90a2b976-d911-40b1-a016-7c8cf2df1a19","Type":"ContainerStarted","Data":"da3dcfadfb537f82c06b341c10d64db96806b3109f16177a1b286dae325ae5b3"} Jan 27 14:04:02 crc kubenswrapper[4914]: I0127 14:04:02.246234 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-89vdt" event={"ID":"54c4cebf-28fc-49cf-93e1-c10215cd7c85","Type":"ContainerStarted","Data":"15f004f3ed47379357a42bf0e685c29cb5d444b5b17798ee0c3b61450a73cb5e"} Jan 27 14:04:02 crc kubenswrapper[4914]: I0127 14:04:02.327137 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=56.073858999 podStartE2EDuration="1m6.327122532s" podCreationTimestamp="2026-01-27 14:02:56 +0000 UTC" firstStartedPulling="2026-01-27 14:03:11.457476972 +0000 UTC m=+1149.769827057" lastFinishedPulling="2026-01-27 14:03:21.710740495 +0000 UTC m=+1160.023090590" observedRunningTime="2026-01-27 14:04:02.315060652 +0000 UTC m=+1200.627410727" watchObservedRunningTime="2026-01-27 14:04:02.327122532 +0000 UTC m=+1200.639472617" Jan 27 14:04:02 crc kubenswrapper[4914]: I0127 14:04:02.328619 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=55.951524952 podStartE2EDuration="1m6.328609902s" podCreationTimestamp="2026-01-27 14:02:56 +0000 UTC" firstStartedPulling="2026-01-27 14:03:11.172296903 +0000 UTC m=+1149.484646988" lastFinishedPulling="2026-01-27 14:03:21.549381853 +0000 UTC m=+1159.861731938" observedRunningTime="2026-01-27 14:04:02.272213168 +0000 UTC m=+1200.584563253" watchObservedRunningTime="2026-01-27 14:04:02.328609902 +0000 UTC m=+1200.640959987" Jan 27 14:04:02 crc kubenswrapper[4914]: I0127 14:04:02.371214 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-89vdt" podStartSLOduration=2.158691118 podStartE2EDuration="16.371197249s" podCreationTimestamp="2026-01-27 14:03:46 +0000 UTC" firstStartedPulling="2026-01-27 14:03:47.167427962 +0000 UTC m=+1185.479778047" lastFinishedPulling="2026-01-27 14:04:01.379934093 +0000 UTC m=+1199.692284178" observedRunningTime="2026-01-27 14:04:02.358704986 +0000 UTC m=+1200.671055081" watchObservedRunningTime="2026-01-27 14:04:02.371197249 +0000 UTC m=+1200.683547334" Jan 27 14:04:05 crc kubenswrapper[4914]: I0127 14:04:05.005123 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mpmhb" Jan 27 14:04:05 crc kubenswrapper[4914]: I0127 14:04:05.118064 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5jl5\" (UniqueName: \"kubernetes.io/projected/90a2b976-d911-40b1-a016-7c8cf2df1a19-kube-api-access-r5jl5\") pod \"90a2b976-d911-40b1-a016-7c8cf2df1a19\" (UID: \"90a2b976-d911-40b1-a016-7c8cf2df1a19\") " Jan 27 14:04:05 crc kubenswrapper[4914]: I0127 14:04:05.118154 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a2b976-d911-40b1-a016-7c8cf2df1a19-operator-scripts\") pod \"90a2b976-d911-40b1-a016-7c8cf2df1a19\" (UID: \"90a2b976-d911-40b1-a016-7c8cf2df1a19\") " Jan 27 14:04:05 crc kubenswrapper[4914]: I0127 14:04:05.118868 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90a2b976-d911-40b1-a016-7c8cf2df1a19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90a2b976-d911-40b1-a016-7c8cf2df1a19" (UID: "90a2b976-d911-40b1-a016-7c8cf2df1a19"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:04:05 crc kubenswrapper[4914]: I0127 14:04:05.133993 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a2b976-d911-40b1-a016-7c8cf2df1a19-kube-api-access-r5jl5" (OuterVolumeSpecName: "kube-api-access-r5jl5") pod "90a2b976-d911-40b1-a016-7c8cf2df1a19" (UID: "90a2b976-d911-40b1-a016-7c8cf2df1a19"). InnerVolumeSpecName "kube-api-access-r5jl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:04:05 crc kubenswrapper[4914]: I0127 14:04:05.220229 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5jl5\" (UniqueName: \"kubernetes.io/projected/90a2b976-d911-40b1-a016-7c8cf2df1a19-kube-api-access-r5jl5\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:05 crc kubenswrapper[4914]: I0127 14:04:05.220268 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a2b976-d911-40b1-a016-7c8cf2df1a19-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:05 crc kubenswrapper[4914]: I0127 14:04:05.267869 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mpmhb" event={"ID":"90a2b976-d911-40b1-a016-7c8cf2df1a19","Type":"ContainerDied","Data":"da3dcfadfb537f82c06b341c10d64db96806b3109f16177a1b286dae325ae5b3"} Jan 27 14:04:05 crc kubenswrapper[4914]: I0127 14:04:05.267909 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da3dcfadfb537f82c06b341c10d64db96806b3109f16177a1b286dae325ae5b3" Jan 27 14:04:05 crc kubenswrapper[4914]: I0127 14:04:05.267969 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mpmhb" Jan 27 14:04:05 crc kubenswrapper[4914]: I0127 14:04:05.280462 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"4f5e310a4bfb6330910c38b9116a2248cdefb00e68062c019ed5117d8f48222f"} Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.272602 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fwgsp" podUID="16d7aef1-746e-4166-a82d-e40371ebc96c" containerName="ovn-controller" probeResult="failure" output=< Jan 27 14:04:06 crc kubenswrapper[4914]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 14:04:06 crc kubenswrapper[4914]: > Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.290613 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-d2xdr" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.319677 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"8ac2afaba2a0819bc406e32bbdf0ebef983b82e8bd325539634644d1867e557b"} Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.319716 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"fc89258def261d76ed1f93d7d97edb0f4094f66a609c8571a48dc9d1981f64a7"} Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.319725 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"6991ffe677d53801334a6017b09e4d772e72e902539dd2906164aa2a2414badd"} Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.500258 4914 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/ovn-controller-fwgsp-config-j7p6s"] Jan 27 14:04:06 crc kubenswrapper[4914]: E0127 14:04:06.500599 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a2b976-d911-40b1-a016-7c8cf2df1a19" containerName="mariadb-account-create-update" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.500617 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a2b976-d911-40b1-a016-7c8cf2df1a19" containerName="mariadb-account-create-update" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.500749 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a2b976-d911-40b1-a016-7c8cf2df1a19" containerName="mariadb-account-create-update" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.506733 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fwgsp-config-j7p6s" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.508635 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.536493 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fwgsp-config-j7p6s"] Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.641450 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-run\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.641675 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjvn\" (UniqueName: \"kubernetes.io/projected/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-kube-api-access-chjvn\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: 
\"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.641718 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-run-ovn\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.641808 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-additional-scripts\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.641915 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-log-ovn\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.641954 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-scripts\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.742867 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-log-ovn\") 
pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.742915 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-scripts\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.742957 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-run\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.743048 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjvn\" (UniqueName: \"kubernetes.io/projected/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-kube-api-access-chjvn\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.743072 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-run-ovn\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s" Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.743101 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-additional-scripts\") pod 
\"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s"
Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.743542 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-log-ovn\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s"
Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.743603 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-run-ovn\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s"
Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.743658 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-run\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s"
Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.744077 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-additional-scripts\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s"
Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.745351 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-scripts\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s"
Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.774126 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjvn\" (UniqueName: \"kubernetes.io/projected/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-kube-api-access-chjvn\") pod \"ovn-controller-fwgsp-config-j7p6s\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") " pod="openstack/ovn-controller-fwgsp-config-j7p6s"
Jan 27 14:04:06 crc kubenswrapper[4914]: I0127 14:04:06.824098 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fwgsp-config-j7p6s"
Jan 27 14:04:07 crc kubenswrapper[4914]: I0127 14:04:07.322330 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"32f0087e24a1606c6ac5b4940227820fc876749de467882d53693f527918b31e"}
Jan 27 14:04:07 crc kubenswrapper[4914]: I0127 14:04:07.397453 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fwgsp-config-j7p6s"]
Jan 27 14:04:08 crc kubenswrapper[4914]: I0127 14:04:08.355383 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"ffa6a09d34035b4155e54e1afbc73162f310d250d5159f3be6f037d83ee1b08e"}
Jan 27 14:04:08 crc kubenswrapper[4914]: I0127 14:04:08.355758 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"f999507c6ae74ae6735a13be041a3d2b611d9cdcf6fe2223e68ee9f66051a21b"}
Jan 27 14:04:08 crc kubenswrapper[4914]: I0127 14:04:08.355774 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"22efd47ba4d2340d053404519cea28fa98b8e6e99d319fbdbbbf9a770bf23e2c"}
Jan 27 14:04:08 crc kubenswrapper[4914]: I0127 14:04:08.355785 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"55ac04749ba1d3186686f11ec71057f1fa1c4bd9327c935351266cfde663dd81"}
Jan 27 14:04:08 crc kubenswrapper[4914]: I0127 14:04:08.355797 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"c03865aa3c0d49a2828e64e53c6485ef35b01cad3c5623baa20f8edd797bb7fe"}
Jan 27 14:04:08 crc kubenswrapper[4914]: I0127 14:04:08.357256 4914 generic.go:334] "Generic (PLEG): container finished" podID="b4c4082a-7e35-4d65-a0b4-3d215d0830f9" containerID="cff9f1459685a45bc8c8fffc867e42a6e36b8d78a0fb5b986b715debedd9422f" exitCode=0
Jan 27 14:04:08 crc kubenswrapper[4914]: I0127 14:04:08.357297 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fwgsp-config-j7p6s" event={"ID":"b4c4082a-7e35-4d65-a0b4-3d215d0830f9","Type":"ContainerDied","Data":"cff9f1459685a45bc8c8fffc867e42a6e36b8d78a0fb5b986b715debedd9422f"}
Jan 27 14:04:08 crc kubenswrapper[4914]: I0127 14:04:08.357323 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fwgsp-config-j7p6s" event={"ID":"b4c4082a-7e35-4d65-a0b4-3d215d0830f9","Type":"ContainerStarted","Data":"e380123cb1241ac72138d31a846583be204b884ae703b41a99b11f2124821ec6"}
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.371129 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62cc5d9e-afad-4888-9e8f-c57f7b185d2b","Type":"ContainerStarted","Data":"2b2b36340f7a112818a8e7bde1baedfc5f28970eb602389cdeca44dfe290ab98"}
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.413059 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.540154004 podStartE2EDuration="36.413038974s" podCreationTimestamp="2026-01-27 14:03:33 +0000 UTC" firstStartedPulling="2026-01-27 14:03:51.111575956 +0000 UTC m=+1189.423926041" lastFinishedPulling="2026-01-27 14:04:06.984460926 +0000 UTC m=+1205.296811011" observedRunningTime="2026-01-27 14:04:09.409118937 +0000 UTC m=+1207.721469042" watchObservedRunningTime="2026-01-27 14:04:09.413038974 +0000 UTC m=+1207.725389059"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.695039 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fwgsp-config-j7p6s"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.709613 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-68hhl"]
Jan 27 14:04:09 crc kubenswrapper[4914]: E0127 14:04:09.709946 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c4082a-7e35-4d65-a0b4-3d215d0830f9" containerName="ovn-config"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.709967 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c4082a-7e35-4d65-a0b4-3d215d0830f9" containerName="ovn-config"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.710132 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c4082a-7e35-4d65-a0b4-3d215d0830f9" containerName="ovn-config"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.713145 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.720393 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.740845 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-68hhl"]
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.796258 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-run\") pod \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") "
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.796307 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-additional-scripts\") pod \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") "
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.796354 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-log-ovn\") pod \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") "
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.796421 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-run-ovn\") pod \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") "
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.796441 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chjvn\" (UniqueName: \"kubernetes.io/projected/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-kube-api-access-chjvn\") pod \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") "
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.796460 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-scripts\") pod \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\" (UID: \"b4c4082a-7e35-4d65-a0b4-3d215d0830f9\") "
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.796570 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b4c4082a-7e35-4d65-a0b4-3d215d0830f9" (UID: "b4c4082a-7e35-4d65-a0b4-3d215d0830f9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.796599 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b4c4082a-7e35-4d65-a0b4-3d215d0830f9" (UID: "b4c4082a-7e35-4d65-a0b4-3d215d0830f9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.796604 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-run" (OuterVolumeSpecName: "var-run") pod "b4c4082a-7e35-4d65-a0b4-3d215d0830f9" (UID: "b4c4082a-7e35-4d65-a0b4-3d215d0830f9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.796917 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.796976 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.797007 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-config\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.797033 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.797083 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms7fd\" (UniqueName: \"kubernetes.io/projected/9ffe4b69-8204-4cdf-9d33-7017d190606b-kube-api-access-ms7fd\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.797114 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.797159 4914 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-run\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.797170 4914 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.797177 4914 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.797688 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b4c4082a-7e35-4d65-a0b4-3d215d0830f9" (UID: "b4c4082a-7e35-4d65-a0b4-3d215d0830f9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.798763 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-scripts" (OuterVolumeSpecName: "scripts") pod "b4c4082a-7e35-4d65-a0b4-3d215d0830f9" (UID: "b4c4082a-7e35-4d65-a0b4-3d215d0830f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.811193 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-kube-api-access-chjvn" (OuterVolumeSpecName: "kube-api-access-chjvn") pod "b4c4082a-7e35-4d65-a0b4-3d215d0830f9" (UID: "b4c4082a-7e35-4d65-a0b4-3d215d0830f9"). InnerVolumeSpecName "kube-api-access-chjvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.899046 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.899112 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms7fd\" (UniqueName: \"kubernetes.io/projected/9ffe4b69-8204-4cdf-9d33-7017d190606b-kube-api-access-ms7fd\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.899163 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.899216 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.899274 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.899312 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-config\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.899378 4914 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.899392 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chjvn\" (UniqueName: \"kubernetes.io/projected/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-kube-api-access-chjvn\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.899405 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4c4082a-7e35-4d65-a0b4-3d215d0830f9-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.900233 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.900264 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.900336 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.900462 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.900477 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-config\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:09 crc kubenswrapper[4914]: I0127 14:04:09.928625 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms7fd\" (UniqueName: \"kubernetes.io/projected/9ffe4b69-8204-4cdf-9d33-7017d190606b-kube-api-access-ms7fd\") pod \"dnsmasq-dns-8467b54bcc-68hhl\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:10 crc kubenswrapper[4914]: I0127 14:04:10.031582 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:10 crc kubenswrapper[4914]: I0127 14:04:10.379768 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fwgsp-config-j7p6s" event={"ID":"b4c4082a-7e35-4d65-a0b4-3d215d0830f9","Type":"ContainerDied","Data":"e380123cb1241ac72138d31a846583be204b884ae703b41a99b11f2124821ec6"}
Jan 27 14:04:10 crc kubenswrapper[4914]: I0127 14:04:10.380053 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e380123cb1241ac72138d31a846583be204b884ae703b41a99b11f2124821ec6"
Jan 27 14:04:10 crc kubenswrapper[4914]: I0127 14:04:10.379797 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fwgsp-config-j7p6s"
Jan 27 14:04:10 crc kubenswrapper[4914]: I0127 14:04:10.566516 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-68hhl"]
Jan 27 14:04:10 crc kubenswrapper[4914]: W0127 14:04:10.577970 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ffe4b69_8204_4cdf_9d33_7017d190606b.slice/crio-2a8b9550acad341df08d5eee279c6dc4cbcc637dfdf65af5ed3d0a410795413f WatchSource:0}: Error finding container 2a8b9550acad341df08d5eee279c6dc4cbcc637dfdf65af5ed3d0a410795413f: Status 404 returned error can't find the container with id 2a8b9550acad341df08d5eee279c6dc4cbcc637dfdf65af5ed3d0a410795413f
Jan 27 14:04:10 crc kubenswrapper[4914]: I0127 14:04:10.827181 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fwgsp-config-j7p6s"]
Jan 27 14:04:10 crc kubenswrapper[4914]: I0127 14:04:10.832927 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-fwgsp-config-j7p6s"]
Jan 27 14:04:11 crc kubenswrapper[4914]: I0127 14:04:11.263903 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-fwgsp"
Jan 27 14:04:11 crc kubenswrapper[4914]: I0127 14:04:11.394504 4914 generic.go:334] "Generic (PLEG): container finished" podID="54c4cebf-28fc-49cf-93e1-c10215cd7c85" containerID="15f004f3ed47379357a42bf0e685c29cb5d444b5b17798ee0c3b61450a73cb5e" exitCode=0
Jan 27 14:04:11 crc kubenswrapper[4914]: I0127 14:04:11.394591 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-89vdt" event={"ID":"54c4cebf-28fc-49cf-93e1-c10215cd7c85","Type":"ContainerDied","Data":"15f004f3ed47379357a42bf0e685c29cb5d444b5b17798ee0c3b61450a73cb5e"}
Jan 27 14:04:11 crc kubenswrapper[4914]: I0127 14:04:11.401013 4914 generic.go:334] "Generic (PLEG): container finished" podID="9ffe4b69-8204-4cdf-9d33-7017d190606b" containerID="4cd2e865e3123d84dda0c660919c7b8ac4e92e593790a7475b54b030b8c34244" exitCode=0
Jan 27 14:04:11 crc kubenswrapper[4914]: I0127 14:04:11.401058 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-68hhl" event={"ID":"9ffe4b69-8204-4cdf-9d33-7017d190606b","Type":"ContainerDied","Data":"4cd2e865e3123d84dda0c660919c7b8ac4e92e593790a7475b54b030b8c34244"}
Jan 27 14:04:11 crc kubenswrapper[4914]: I0127 14:04:11.401101 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-68hhl" event={"ID":"9ffe4b69-8204-4cdf-9d33-7017d190606b","Type":"ContainerStarted","Data":"2a8b9550acad341df08d5eee279c6dc4cbcc637dfdf65af5ed3d0a410795413f"}
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.306943 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c4082a-7e35-4d65-a0b4-3d215d0830f9" path="/var/lib/kubelet/pods/b4c4082a-7e35-4d65-a0b4-3d215d0830f9/volumes"
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.441850 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-68hhl" event={"ID":"9ffe4b69-8204-4cdf-9d33-7017d190606b","Type":"ContainerStarted","Data":"44bcf754f2ebaf8d77352dd874fb2481d6a8d847674c0c64f2f7881b13133599"}
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.443626 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8467b54bcc-68hhl"
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.469193 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8467b54bcc-68hhl" podStartSLOduration=3.469170609 podStartE2EDuration="3.469170609s" podCreationTimestamp="2026-01-27 14:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:04:12.463868513 +0000 UTC m=+1210.776218618" watchObservedRunningTime="2026-01-27 14:04:12.469170609 +0000 UTC m=+1210.781520704"
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.821394 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-89vdt"
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.842502 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-config-data\") pod \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") "
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.842550 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-db-sync-config-data\") pod \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") "
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.842575 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx74v\" (UniqueName: \"kubernetes.io/projected/54c4cebf-28fc-49cf-93e1-c10215cd7c85-kube-api-access-gx74v\") pod \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") "
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.842623 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-combined-ca-bundle\") pod \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\" (UID: \"54c4cebf-28fc-49cf-93e1-c10215cd7c85\") "
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.857064 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54c4cebf-28fc-49cf-93e1-c10215cd7c85-kube-api-access-gx74v" (OuterVolumeSpecName: "kube-api-access-gx74v") pod "54c4cebf-28fc-49cf-93e1-c10215cd7c85" (UID: "54c4cebf-28fc-49cf-93e1-c10215cd7c85"). InnerVolumeSpecName "kube-api-access-gx74v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.858966 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "54c4cebf-28fc-49cf-93e1-c10215cd7c85" (UID: "54c4cebf-28fc-49cf-93e1-c10215cd7c85"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.888014 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-config-data" (OuterVolumeSpecName: "config-data") pod "54c4cebf-28fc-49cf-93e1-c10215cd7c85" (UID: "54c4cebf-28fc-49cf-93e1-c10215cd7c85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.896003 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54c4cebf-28fc-49cf-93e1-c10215cd7c85" (UID: "54c4cebf-28fc-49cf-93e1-c10215cd7c85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.944103 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.944352 4914 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.946043 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx74v\" (UniqueName: \"kubernetes.io/projected/54c4cebf-28fc-49cf-93e1-c10215cd7c85-kube-api-access-gx74v\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:12 crc kubenswrapper[4914]: I0127 14:04:12.946272 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c4cebf-28fc-49cf-93e1-c10215cd7c85-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.454615 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-89vdt"
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.454799 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-89vdt" event={"ID":"54c4cebf-28fc-49cf-93e1-c10215cd7c85","Type":"ContainerDied","Data":"2272db3c28e33b6c8036475c014cda67f01001bd191ecc2c053e31521f8356ca"}
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.455096 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2272db3c28e33b6c8036475c014cda67f01001bd191ecc2c053e31521f8356ca"
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.768673 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-68hhl"]
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.816934 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"]
Jan 27 14:04:13 crc kubenswrapper[4914]: E0127 14:04:13.817261 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54c4cebf-28fc-49cf-93e1-c10215cd7c85" containerName="glance-db-sync"
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.817278 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="54c4cebf-28fc-49cf-93e1-c10215cd7c85" containerName="glance-db-sync"
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.817414 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="54c4cebf-28fc-49cf-93e1-c10215cd7c85" containerName="glance-db-sync"
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.818291 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.861520 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"]
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.959475 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.959555 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.959592 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-config\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.959633 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.959674 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7hzt\" (UniqueName: \"kubernetes.io/projected/829a344f-0e48-4d51-aa5d-31dd6b0fa066-kube-api-access-m7hzt\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:13 crc kubenswrapper[4914]: I0127 14:04:13.959719 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.060938 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.060997 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7hzt\" (UniqueName: \"kubernetes.io/projected/829a344f-0e48-4d51-aa5d-31dd6b0fa066-kube-api-access-m7hzt\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.061021 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.061465 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.061863 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.062103 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.062245 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.062307 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.062337 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-config\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.062875 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.063151 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-config\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.079986 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7hzt\" (UniqueName: \"kubernetes.io/projected/829a344f-0e48-4d51-aa5d-31dd6b0fa066-kube-api-access-m7hzt\") pod \"dnsmasq-dns-56c9bc6f5c-tc2gg\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.135615 4914 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg" Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.460466 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8467b54bcc-68hhl" podUID="9ffe4b69-8204-4cdf-9d33-7017d190606b" containerName="dnsmasq-dns" containerID="cri-o://44bcf754f2ebaf8d77352dd874fb2481d6a8d847674c0c64f2f7881b13133599" gracePeriod=10 Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.638363 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"] Jan 27 14:04:14 crc kubenswrapper[4914]: W0127 14:04:14.650165 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod829a344f_0e48_4d51_aa5d_31dd6b0fa066.slice/crio-4cac61a9d8787af8b9e72e805431f56166f2ed807c9119de5323a6d6b5b7aa5b WatchSource:0}: Error finding container 4cac61a9d8787af8b9e72e805431f56166f2ed807c9119de5323a6d6b5b7aa5b: Status 404 returned error can't find the container with id 4cac61a9d8787af8b9e72e805431f56166f2ed807c9119de5323a6d6b5b7aa5b Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.842624 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-68hhl" Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.974046 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-config\") pod \"9ffe4b69-8204-4cdf-9d33-7017d190606b\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.974125 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-ovsdbserver-nb\") pod \"9ffe4b69-8204-4cdf-9d33-7017d190606b\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.974169 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-dns-svc\") pod \"9ffe4b69-8204-4cdf-9d33-7017d190606b\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.974193 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-dns-swift-storage-0\") pod \"9ffe4b69-8204-4cdf-9d33-7017d190606b\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.974254 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms7fd\" (UniqueName: \"kubernetes.io/projected/9ffe4b69-8204-4cdf-9d33-7017d190606b-kube-api-access-ms7fd\") pod \"9ffe4b69-8204-4cdf-9d33-7017d190606b\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.974288 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-ovsdbserver-sb\") pod \"9ffe4b69-8204-4cdf-9d33-7017d190606b\" (UID: \"9ffe4b69-8204-4cdf-9d33-7017d190606b\") " Jan 27 14:04:14 crc kubenswrapper[4914]: I0127 14:04:14.979181 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ffe4b69-8204-4cdf-9d33-7017d190606b-kube-api-access-ms7fd" (OuterVolumeSpecName: "kube-api-access-ms7fd") pod "9ffe4b69-8204-4cdf-9d33-7017d190606b" (UID: "9ffe4b69-8204-4cdf-9d33-7017d190606b"). InnerVolumeSpecName "kube-api-access-ms7fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.013757 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ffe4b69-8204-4cdf-9d33-7017d190606b" (UID: "9ffe4b69-8204-4cdf-9d33-7017d190606b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.020533 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ffe4b69-8204-4cdf-9d33-7017d190606b" (UID: "9ffe4b69-8204-4cdf-9d33-7017d190606b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.026159 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-config" (OuterVolumeSpecName: "config") pod "9ffe4b69-8204-4cdf-9d33-7017d190606b" (UID: "9ffe4b69-8204-4cdf-9d33-7017d190606b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.027885 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ffe4b69-8204-4cdf-9d33-7017d190606b" (UID: "9ffe4b69-8204-4cdf-9d33-7017d190606b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.033373 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ffe4b69-8204-4cdf-9d33-7017d190606b" (UID: "9ffe4b69-8204-4cdf-9d33-7017d190606b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.075746 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms7fd\" (UniqueName: \"kubernetes.io/projected/9ffe4b69-8204-4cdf-9d33-7017d190606b-kube-api-access-ms7fd\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.076151 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.076236 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.076302 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 
14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.076364 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.076426 4914 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ffe4b69-8204-4cdf-9d33-7017d190606b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.472764 4914 generic.go:334] "Generic (PLEG): container finished" podID="9ffe4b69-8204-4cdf-9d33-7017d190606b" containerID="44bcf754f2ebaf8d77352dd874fb2481d6a8d847674c0c64f2f7881b13133599" exitCode=0 Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.472805 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-68hhl" event={"ID":"9ffe4b69-8204-4cdf-9d33-7017d190606b","Type":"ContainerDied","Data":"44bcf754f2ebaf8d77352dd874fb2481d6a8d847674c0c64f2f7881b13133599"} Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.473147 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-68hhl" event={"ID":"9ffe4b69-8204-4cdf-9d33-7017d190606b","Type":"ContainerDied","Data":"2a8b9550acad341df08d5eee279c6dc4cbcc637dfdf65af5ed3d0a410795413f"} Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.473169 4914 scope.go:117] "RemoveContainer" containerID="44bcf754f2ebaf8d77352dd874fb2481d6a8d847674c0c64f2f7881b13133599" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.472935 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-68hhl" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.476170 4914 generic.go:334] "Generic (PLEG): container finished" podID="829a344f-0e48-4d51-aa5d-31dd6b0fa066" containerID="fe3db557a2df98c46ff1ad52453d44b4cfe28ea2a8c85cff9ef04eec00928f0f" exitCode=0 Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.476204 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg" event={"ID":"829a344f-0e48-4d51-aa5d-31dd6b0fa066","Type":"ContainerDied","Data":"fe3db557a2df98c46ff1ad52453d44b4cfe28ea2a8c85cff9ef04eec00928f0f"} Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.476250 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg" event={"ID":"829a344f-0e48-4d51-aa5d-31dd6b0fa066","Type":"ContainerStarted","Data":"4cac61a9d8787af8b9e72e805431f56166f2ed807c9119de5323a6d6b5b7aa5b"} Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.492353 4914 scope.go:117] "RemoveContainer" containerID="4cd2e865e3123d84dda0c660919c7b8ac4e92e593790a7475b54b030b8c34244" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.603733 4914 scope.go:117] "RemoveContainer" containerID="44bcf754f2ebaf8d77352dd874fb2481d6a8d847674c0c64f2f7881b13133599" Jan 27 14:04:15 crc kubenswrapper[4914]: E0127 14:04:15.604245 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44bcf754f2ebaf8d77352dd874fb2481d6a8d847674c0c64f2f7881b13133599\": container with ID starting with 44bcf754f2ebaf8d77352dd874fb2481d6a8d847674c0c64f2f7881b13133599 not found: ID does not exist" containerID="44bcf754f2ebaf8d77352dd874fb2481d6a8d847674c0c64f2f7881b13133599" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.604275 4914 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"44bcf754f2ebaf8d77352dd874fb2481d6a8d847674c0c64f2f7881b13133599"} err="failed to get container status \"44bcf754f2ebaf8d77352dd874fb2481d6a8d847674c0c64f2f7881b13133599\": rpc error: code = NotFound desc = could not find container \"44bcf754f2ebaf8d77352dd874fb2481d6a8d847674c0c64f2f7881b13133599\": container with ID starting with 44bcf754f2ebaf8d77352dd874fb2481d6a8d847674c0c64f2f7881b13133599 not found: ID does not exist" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.604294 4914 scope.go:117] "RemoveContainer" containerID="4cd2e865e3123d84dda0c660919c7b8ac4e92e593790a7475b54b030b8c34244" Jan 27 14:04:15 crc kubenswrapper[4914]: E0127 14:04:15.604602 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd2e865e3123d84dda0c660919c7b8ac4e92e593790a7475b54b030b8c34244\": container with ID starting with 4cd2e865e3123d84dda0c660919c7b8ac4e92e593790a7475b54b030b8c34244 not found: ID does not exist" containerID="4cd2e865e3123d84dda0c660919c7b8ac4e92e593790a7475b54b030b8c34244" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.604704 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd2e865e3123d84dda0c660919c7b8ac4e92e593790a7475b54b030b8c34244"} err="failed to get container status \"4cd2e865e3123d84dda0c660919c7b8ac4e92e593790a7475b54b030b8c34244\": rpc error: code = NotFound desc = could not find container \"4cd2e865e3123d84dda0c660919c7b8ac4e92e593790a7475b54b030b8c34244\": container with ID starting with 4cd2e865e3123d84dda0c660919c7b8ac4e92e593790a7475b54b030b8c34244 not found: ID does not exist" Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.668987 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-68hhl"] Jan 27 14:04:15 crc kubenswrapper[4914]: I0127 14:04:15.676756 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-8467b54bcc-68hhl"] Jan 27 14:04:16 crc kubenswrapper[4914]: I0127 14:04:16.306470 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ffe4b69-8204-4cdf-9d33-7017d190606b" path="/var/lib/kubelet/pods/9ffe4b69-8204-4cdf-9d33-7017d190606b/volumes" Jan 27 14:04:16 crc kubenswrapper[4914]: I0127 14:04:16.487608 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg" event={"ID":"829a344f-0e48-4d51-aa5d-31dd6b0fa066","Type":"ContainerStarted","Data":"a84ae35c8affb78fd7d98cbe1a18bbb2b9d889010ad08ba417b02585a97e9000"} Jan 27 14:04:16 crc kubenswrapper[4914]: I0127 14:04:16.487793 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg" Jan 27 14:04:16 crc kubenswrapper[4914]: I0127 14:04:16.509743 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg" podStartSLOduration=3.509725543 podStartE2EDuration="3.509725543s" podCreationTimestamp="2026-01-27 14:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:04:16.506866825 +0000 UTC m=+1214.819216920" watchObservedRunningTime="2026-01-27 14:04:16.509725543 +0000 UTC m=+1214.822075628" Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.488081 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.782124 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-rq4q6"] Jan 27 14:04:17 crc kubenswrapper[4914]: E0127 14:04:17.782440 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ffe4b69-8204-4cdf-9d33-7017d190606b" containerName="dnsmasq-dns" Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.782456 4914 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9ffe4b69-8204-4cdf-9d33-7017d190606b" containerName="dnsmasq-dns" Jan 27 14:04:17 crc kubenswrapper[4914]: E0127 14:04:17.782476 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ffe4b69-8204-4cdf-9d33-7017d190606b" containerName="init" Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.782483 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ffe4b69-8204-4cdf-9d33-7017d190606b" containerName="init" Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.782647 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ffe4b69-8204-4cdf-9d33-7017d190606b" containerName="dnsmasq-dns" Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.783226 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rq4q6" Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.797247 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rq4q6"] Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.856978 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.873865 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-6bd6p"] Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.874866 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6bd6p" Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.895395 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6bd6p"] Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.901723 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-18f0-account-create-update-v84s5"] Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.903110 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-18f0-account-create-update-v84s5" Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.905325 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.926290 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/593fca6b-0503-46d8-8b39-0b6fbf49c883-operator-scripts\") pod \"cinder-db-create-rq4q6\" (UID: \"593fca6b-0503-46d8-8b39-0b6fbf49c883\") " pod="openstack/cinder-db-create-rq4q6" Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.926425 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jr9p\" (UniqueName: \"kubernetes.io/projected/593fca6b-0503-46d8-8b39-0b6fbf49c883-kube-api-access-8jr9p\") pod \"cinder-db-create-rq4q6\" (UID: \"593fca6b-0503-46d8-8b39-0b6fbf49c883\") " pod="openstack/cinder-db-create-rq4q6" Jan 27 14:04:17 crc kubenswrapper[4914]: I0127 14:04:17.952952 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-18f0-account-create-update-v84s5"] Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.020122 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a4a5-account-create-update-dzsqd"] Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.021460 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a4a5-account-create-update-dzsqd" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.028510 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.028556 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94e73cf8-51af-4781-be3c-ef7490061629-operator-scripts\") pod \"cinder-18f0-account-create-update-v84s5\" (UID: \"94e73cf8-51af-4781-be3c-ef7490061629\") " pod="openstack/cinder-18f0-account-create-update-v84s5" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.028593 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722d139b-6c73-46cf-918b-6eec6bcee414-operator-scripts\") pod \"barbican-db-create-6bd6p\" (UID: \"722d139b-6c73-46cf-918b-6eec6bcee414\") " pod="openstack/barbican-db-create-6bd6p" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.028674 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jr9p\" (UniqueName: \"kubernetes.io/projected/593fca6b-0503-46d8-8b39-0b6fbf49c883-kube-api-access-8jr9p\") pod \"cinder-db-create-rq4q6\" (UID: \"593fca6b-0503-46d8-8b39-0b6fbf49c883\") " pod="openstack/cinder-db-create-rq4q6" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.028773 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/593fca6b-0503-46d8-8b39-0b6fbf49c883-operator-scripts\") pod \"cinder-db-create-rq4q6\" (UID: \"593fca6b-0503-46d8-8b39-0b6fbf49c883\") " pod="openstack/cinder-db-create-rq4q6" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.028821 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-w5zs2\" (UniqueName: \"kubernetes.io/projected/94e73cf8-51af-4781-be3c-ef7490061629-kube-api-access-w5zs2\") pod \"cinder-18f0-account-create-update-v84s5\" (UID: \"94e73cf8-51af-4781-be3c-ef7490061629\") " pod="openstack/cinder-18f0-account-create-update-v84s5" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.028910 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj8lc\" (UniqueName: \"kubernetes.io/projected/722d139b-6c73-46cf-918b-6eec6bcee414-kube-api-access-xj8lc\") pod \"barbican-db-create-6bd6p\" (UID: \"722d139b-6c73-46cf-918b-6eec6bcee414\") " pod="openstack/barbican-db-create-6bd6p" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.029674 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/593fca6b-0503-46d8-8b39-0b6fbf49c883-operator-scripts\") pod \"cinder-db-create-rq4q6\" (UID: \"593fca6b-0503-46d8-8b39-0b6fbf49c883\") " pod="openstack/cinder-db-create-rq4q6" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.031553 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a4a5-account-create-update-dzsqd"] Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.079440 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jr9p\" (UniqueName: \"kubernetes.io/projected/593fca6b-0503-46d8-8b39-0b6fbf49c883-kube-api-access-8jr9p\") pod \"cinder-db-create-rq4q6\" (UID: \"593fca6b-0503-46d8-8b39-0b6fbf49c883\") " pod="openstack/cinder-db-create-rq4q6" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.101203 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-rq4q6" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.129930 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eec0612-69ee-4cf2-aa84-c08891b33e53-operator-scripts\") pod \"barbican-a4a5-account-create-update-dzsqd\" (UID: \"9eec0612-69ee-4cf2-aa84-c08891b33e53\") " pod="openstack/barbican-a4a5-account-create-update-dzsqd" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.129983 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94e73cf8-51af-4781-be3c-ef7490061629-operator-scripts\") pod \"cinder-18f0-account-create-update-v84s5\" (UID: \"94e73cf8-51af-4781-be3c-ef7490061629\") " pod="openstack/cinder-18f0-account-create-update-v84s5" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.130004 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722d139b-6c73-46cf-918b-6eec6bcee414-operator-scripts\") pod \"barbican-db-create-6bd6p\" (UID: \"722d139b-6c73-46cf-918b-6eec6bcee414\") " pod="openstack/barbican-db-create-6bd6p" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.130281 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5zs2\" (UniqueName: \"kubernetes.io/projected/94e73cf8-51af-4781-be3c-ef7490061629-kube-api-access-w5zs2\") pod \"cinder-18f0-account-create-update-v84s5\" (UID: \"94e73cf8-51af-4781-be3c-ef7490061629\") " pod="openstack/cinder-18f0-account-create-update-v84s5" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.130367 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbhlg\" (UniqueName: 
\"kubernetes.io/projected/9eec0612-69ee-4cf2-aa84-c08891b33e53-kube-api-access-mbhlg\") pod \"barbican-a4a5-account-create-update-dzsqd\" (UID: \"9eec0612-69ee-4cf2-aa84-c08891b33e53\") " pod="openstack/barbican-a4a5-account-create-update-dzsqd" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.130475 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj8lc\" (UniqueName: \"kubernetes.io/projected/722d139b-6c73-46cf-918b-6eec6bcee414-kube-api-access-xj8lc\") pod \"barbican-db-create-6bd6p\" (UID: \"722d139b-6c73-46cf-918b-6eec6bcee414\") " pod="openstack/barbican-db-create-6bd6p" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.130793 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722d139b-6c73-46cf-918b-6eec6bcee414-operator-scripts\") pod \"barbican-db-create-6bd6p\" (UID: \"722d139b-6c73-46cf-918b-6eec6bcee414\") " pod="openstack/barbican-db-create-6bd6p" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.131355 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94e73cf8-51af-4781-be3c-ef7490061629-operator-scripts\") pod \"cinder-18f0-account-create-update-v84s5\" (UID: \"94e73cf8-51af-4781-be3c-ef7490061629\") " pod="openstack/cinder-18f0-account-create-update-v84s5" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.164983 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5zs2\" (UniqueName: \"kubernetes.io/projected/94e73cf8-51af-4781-be3c-ef7490061629-kube-api-access-w5zs2\") pod \"cinder-18f0-account-create-update-v84s5\" (UID: \"94e73cf8-51af-4781-be3c-ef7490061629\") " pod="openstack/cinder-18f0-account-create-update-v84s5" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.173949 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xj8lc\" (UniqueName: \"kubernetes.io/projected/722d139b-6c73-46cf-918b-6eec6bcee414-kube-api-access-xj8lc\") pod \"barbican-db-create-6bd6p\" (UID: \"722d139b-6c73-46cf-918b-6eec6bcee414\") " pod="openstack/barbican-db-create-6bd6p" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.194758 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6bd6p" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.207862 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-vgx5c"] Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.209169 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vgx5c" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.217039 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-5bzqr"] Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.218187 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5bzqr" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.259904 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-84xg2" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.260113 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.260273 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.260509 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.261853 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbhlg\" (UniqueName: \"kubernetes.io/projected/9eec0612-69ee-4cf2-aa84-c08891b33e53-kube-api-access-mbhlg\") pod \"barbican-a4a5-account-create-update-dzsqd\" (UID: \"9eec0612-69ee-4cf2-aa84-c08891b33e53\") " pod="openstack/barbican-a4a5-account-create-update-dzsqd" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.261898 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eec0612-69ee-4cf2-aa84-c08891b33e53-operator-scripts\") pod \"barbican-a4a5-account-create-update-dzsqd\" (UID: \"9eec0612-69ee-4cf2-aa84-c08891b33e53\") " pod="openstack/barbican-a4a5-account-create-update-dzsqd" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.262502 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eec0612-69ee-4cf2-aa84-c08891b33e53-operator-scripts\") pod \"barbican-a4a5-account-create-update-dzsqd\" (UID: \"9eec0612-69ee-4cf2-aa84-c08891b33e53\") " pod="openstack/barbican-a4a5-account-create-update-dzsqd" Jan 27 14:04:18 crc 
kubenswrapper[4914]: I0127 14:04:18.264085 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18f0-account-create-update-v84s5" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.273716 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5bzqr"] Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.286267 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vgx5c"] Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.312214 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbhlg\" (UniqueName: \"kubernetes.io/projected/9eec0612-69ee-4cf2-aa84-c08891b33e53-kube-api-access-mbhlg\") pod \"barbican-a4a5-account-create-update-dzsqd\" (UID: \"9eec0612-69ee-4cf2-aa84-c08891b33e53\") " pod="openstack/barbican-a4a5-account-create-update-dzsqd" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.353621 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a4a5-account-create-update-dzsqd" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.366517 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg7mn\" (UniqueName: \"kubernetes.io/projected/6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f-kube-api-access-lg7mn\") pod \"neutron-db-create-vgx5c\" (UID: \"6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f\") " pod="openstack/neutron-db-create-vgx5c" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.366591 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/259bfb44-0b45-476a-901a-e70c6b05a0e4-combined-ca-bundle\") pod \"keystone-db-sync-5bzqr\" (UID: \"259bfb44-0b45-476a-901a-e70c6b05a0e4\") " pod="openstack/keystone-db-sync-5bzqr" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.366612 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f-operator-scripts\") pod \"neutron-db-create-vgx5c\" (UID: \"6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f\") " pod="openstack/neutron-db-create-vgx5c" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.366652 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/259bfb44-0b45-476a-901a-e70c6b05a0e4-config-data\") pod \"keystone-db-sync-5bzqr\" (UID: \"259bfb44-0b45-476a-901a-e70c6b05a0e4\") " pod="openstack/keystone-db-sync-5bzqr" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.366701 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slwc5\" (UniqueName: \"kubernetes.io/projected/259bfb44-0b45-476a-901a-e70c6b05a0e4-kube-api-access-slwc5\") pod 
\"keystone-db-sync-5bzqr\" (UID: \"259bfb44-0b45-476a-901a-e70c6b05a0e4\") " pod="openstack/keystone-db-sync-5bzqr" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.468463 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/259bfb44-0b45-476a-901a-e70c6b05a0e4-combined-ca-bundle\") pod \"keystone-db-sync-5bzqr\" (UID: \"259bfb44-0b45-476a-901a-e70c6b05a0e4\") " pod="openstack/keystone-db-sync-5bzqr" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.468537 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f-operator-scripts\") pod \"neutron-db-create-vgx5c\" (UID: \"6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f\") " pod="openstack/neutron-db-create-vgx5c" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.468917 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/259bfb44-0b45-476a-901a-e70c6b05a0e4-config-data\") pod \"keystone-db-sync-5bzqr\" (UID: \"259bfb44-0b45-476a-901a-e70c6b05a0e4\") " pod="openstack/keystone-db-sync-5bzqr" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.469004 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slwc5\" (UniqueName: \"kubernetes.io/projected/259bfb44-0b45-476a-901a-e70c6b05a0e4-kube-api-access-slwc5\") pod \"keystone-db-sync-5bzqr\" (UID: \"259bfb44-0b45-476a-901a-e70c6b05a0e4\") " pod="openstack/keystone-db-sync-5bzqr" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.469068 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg7mn\" (UniqueName: \"kubernetes.io/projected/6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f-kube-api-access-lg7mn\") pod \"neutron-db-create-vgx5c\" (UID: \"6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f\") " 
pod="openstack/neutron-db-create-vgx5c" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.472078 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f-operator-scripts\") pod \"neutron-db-create-vgx5c\" (UID: \"6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f\") " pod="openstack/neutron-db-create-vgx5c" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.475968 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/259bfb44-0b45-476a-901a-e70c6b05a0e4-combined-ca-bundle\") pod \"keystone-db-sync-5bzqr\" (UID: \"259bfb44-0b45-476a-901a-e70c6b05a0e4\") " pod="openstack/keystone-db-sync-5bzqr" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.479404 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/259bfb44-0b45-476a-901a-e70c6b05a0e4-config-data\") pod \"keystone-db-sync-5bzqr\" (UID: \"259bfb44-0b45-476a-901a-e70c6b05a0e4\") " pod="openstack/keystone-db-sync-5bzqr" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.496724 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slwc5\" (UniqueName: \"kubernetes.io/projected/259bfb44-0b45-476a-901a-e70c6b05a0e4-kube-api-access-slwc5\") pod \"keystone-db-sync-5bzqr\" (UID: \"259bfb44-0b45-476a-901a-e70c6b05a0e4\") " pod="openstack/keystone-db-sync-5bzqr" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.508812 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg7mn\" (UniqueName: \"kubernetes.io/projected/6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f-kube-api-access-lg7mn\") pod \"neutron-db-create-vgx5c\" (UID: \"6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f\") " pod="openstack/neutron-db-create-vgx5c" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.521892 4914 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2e5b-account-create-update-szp9d"] Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.523141 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2e5b-account-create-update-szp9d" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.528716 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2e5b-account-create-update-szp9d"] Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.531373 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.608202 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vgx5c" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.629391 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5bzqr" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.673980 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfmvv\" (UniqueName: \"kubernetes.io/projected/1a6452aa-f069-44f7-89ef-2766d721810d-kube-api-access-rfmvv\") pod \"neutron-2e5b-account-create-update-szp9d\" (UID: \"1a6452aa-f069-44f7-89ef-2766d721810d\") " pod="openstack/neutron-2e5b-account-create-update-szp9d" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.674572 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a6452aa-f069-44f7-89ef-2766d721810d-operator-scripts\") pod \"neutron-2e5b-account-create-update-szp9d\" (UID: \"1a6452aa-f069-44f7-89ef-2766d721810d\") " pod="openstack/neutron-2e5b-account-create-update-szp9d" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.762771 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-db-create-rq4q6"] Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.776113 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfmvv\" (UniqueName: \"kubernetes.io/projected/1a6452aa-f069-44f7-89ef-2766d721810d-kube-api-access-rfmvv\") pod \"neutron-2e5b-account-create-update-szp9d\" (UID: \"1a6452aa-f069-44f7-89ef-2766d721810d\") " pod="openstack/neutron-2e5b-account-create-update-szp9d" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.776196 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a6452aa-f069-44f7-89ef-2766d721810d-operator-scripts\") pod \"neutron-2e5b-account-create-update-szp9d\" (UID: \"1a6452aa-f069-44f7-89ef-2766d721810d\") " pod="openstack/neutron-2e5b-account-create-update-szp9d" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.776905 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a6452aa-f069-44f7-89ef-2766d721810d-operator-scripts\") pod \"neutron-2e5b-account-create-update-szp9d\" (UID: \"1a6452aa-f069-44f7-89ef-2766d721810d\") " pod="openstack/neutron-2e5b-account-create-update-szp9d" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.821410 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfmvv\" (UniqueName: \"kubernetes.io/projected/1a6452aa-f069-44f7-89ef-2766d721810d-kube-api-access-rfmvv\") pod \"neutron-2e5b-account-create-update-szp9d\" (UID: \"1a6452aa-f069-44f7-89ef-2766d721810d\") " pod="openstack/neutron-2e5b-account-create-update-szp9d" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.861522 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2e5b-account-create-update-szp9d" Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.942495 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-18f0-account-create-update-v84s5"] Jan 27 14:04:18 crc kubenswrapper[4914]: I0127 14:04:18.962449 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6bd6p"] Jan 27 14:04:18 crc kubenswrapper[4914]: W0127 14:04:18.977022 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod722d139b_6c73_46cf_918b_6eec6bcee414.slice/crio-c80dc3d213cf6df0e30a7b80520279f61485b9b934b58a842f7979c3215d7fda WatchSource:0}: Error finding container c80dc3d213cf6df0e30a7b80520279f61485b9b934b58a842f7979c3215d7fda: Status 404 returned error can't find the container with id c80dc3d213cf6df0e30a7b80520279f61485b9b934b58a842f7979c3215d7fda Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.107195 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a4a5-account-create-update-dzsqd"] Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.298296 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vgx5c"] Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.307205 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5bzqr"] Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.513102 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2e5b-account-create-update-szp9d"] Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.565193 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vgx5c" event={"ID":"6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f","Type":"ContainerStarted","Data":"b9e380f0c1122da95477cea6a02007ada1ae2ee944f89485dcc31bb794cc19dd"} Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 
14:04:19.569534 4914 generic.go:334] "Generic (PLEG): container finished" podID="94e73cf8-51af-4781-be3c-ef7490061629" containerID="d5a34caa13028e6e9bb316d66ab460490ec3ebcc040d1dae69e12cbb5adaedfd" exitCode=0 Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.569930 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18f0-account-create-update-v84s5" event={"ID":"94e73cf8-51af-4781-be3c-ef7490061629","Type":"ContainerDied","Data":"d5a34caa13028e6e9bb316d66ab460490ec3ebcc040d1dae69e12cbb5adaedfd"} Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.569985 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18f0-account-create-update-v84s5" event={"ID":"94e73cf8-51af-4781-be3c-ef7490061629","Type":"ContainerStarted","Data":"43cd4423d620145bef190b71a4e97b844272c5935f619de4a9acbc6fdbdf2c47"} Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.582191 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5bzqr" event={"ID":"259bfb44-0b45-476a-901a-e70c6b05a0e4","Type":"ContainerStarted","Data":"4dbad6e9b3b40ead54c7da7caadd016d0ec3c1d35d273182c6095fe813b34c46"} Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.596736 4914 generic.go:334] "Generic (PLEG): container finished" podID="593fca6b-0503-46d8-8b39-0b6fbf49c883" containerID="714faa35d5428cbad8657544f958e9111b3f43151b4fc41bafde03b3931c836d" exitCode=0 Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.597026 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rq4q6" event={"ID":"593fca6b-0503-46d8-8b39-0b6fbf49c883","Type":"ContainerDied","Data":"714faa35d5428cbad8657544f958e9111b3f43151b4fc41bafde03b3931c836d"} Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.597191 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rq4q6" 
event={"ID":"593fca6b-0503-46d8-8b39-0b6fbf49c883","Type":"ContainerStarted","Data":"7684198e701f08e022a9f43e5a47c5e2c4738d059daf736a36bb9f3aa90fb0d8"} Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.602103 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6bd6p" event={"ID":"722d139b-6c73-46cf-918b-6eec6bcee414","Type":"ContainerStarted","Data":"416641a2fe05bb1857e3128a835c1308f47844ec76d2531de4f6c1463e72486f"} Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.602151 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6bd6p" event={"ID":"722d139b-6c73-46cf-918b-6eec6bcee414","Type":"ContainerStarted","Data":"c80dc3d213cf6df0e30a7b80520279f61485b9b934b58a842f7979c3215d7fda"} Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.611561 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a4a5-account-create-update-dzsqd" event={"ID":"9eec0612-69ee-4cf2-aa84-c08891b33e53","Type":"ContainerStarted","Data":"718885fcd36db39c2d3c9dff1df739da69d1578a0bec41ca5812cae1e48d7f74"} Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.650955 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-a4a5-account-create-update-dzsqd" podStartSLOduration=2.6509277559999997 podStartE2EDuration="2.650927756s" podCreationTimestamp="2026-01-27 14:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:04:19.640385527 +0000 UTC m=+1217.952735602" watchObservedRunningTime="2026-01-27 14:04:19.650927756 +0000 UTC m=+1217.963277841" Jan 27 14:04:19 crc kubenswrapper[4914]: I0127 14:04:19.674936 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-6bd6p" podStartSLOduration=2.674917642 podStartE2EDuration="2.674917642s" podCreationTimestamp="2026-01-27 14:04:17 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:04:19.658498634 +0000 UTC m=+1217.970848719" watchObservedRunningTime="2026-01-27 14:04:19.674917642 +0000 UTC m=+1217.987267727" Jan 27 14:04:20 crc kubenswrapper[4914]: I0127 14:04:20.625330 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2e5b-account-create-update-szp9d" event={"ID":"1a6452aa-f069-44f7-89ef-2766d721810d","Type":"ContainerDied","Data":"6a963f30042c01e81036d5aa1e25496194c16fa3f6cf98ff318ece876145b2ed"} Jan 27 14:04:20 crc kubenswrapper[4914]: I0127 14:04:20.625292 4914 generic.go:334] "Generic (PLEG): container finished" podID="1a6452aa-f069-44f7-89ef-2766d721810d" containerID="6a963f30042c01e81036d5aa1e25496194c16fa3f6cf98ff318ece876145b2ed" exitCode=0 Jan 27 14:04:20 crc kubenswrapper[4914]: I0127 14:04:20.625779 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2e5b-account-create-update-szp9d" event={"ID":"1a6452aa-f069-44f7-89ef-2766d721810d","Type":"ContainerStarted","Data":"7fcf482f84992d89c856c5472e4db6893a9886e842c12382a4d49de62cb4eaa8"} Jan 27 14:04:20 crc kubenswrapper[4914]: I0127 14:04:20.628307 4914 generic.go:334] "Generic (PLEG): container finished" podID="722d139b-6c73-46cf-918b-6eec6bcee414" containerID="416641a2fe05bb1857e3128a835c1308f47844ec76d2531de4f6c1463e72486f" exitCode=0 Jan 27 14:04:20 crc kubenswrapper[4914]: I0127 14:04:20.628388 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6bd6p" event={"ID":"722d139b-6c73-46cf-918b-6eec6bcee414","Type":"ContainerDied","Data":"416641a2fe05bb1857e3128a835c1308f47844ec76d2531de4f6c1463e72486f"} Jan 27 14:04:20 crc kubenswrapper[4914]: I0127 14:04:20.631943 4914 generic.go:334] "Generic (PLEG): container finished" podID="9eec0612-69ee-4cf2-aa84-c08891b33e53" containerID="6bb3d784aed651d5b2a423ec61dd80da1e5a52eb1bbd6a800fc03faec3c1e21c" exitCode=0 Jan 
27 14:04:20 crc kubenswrapper[4914]: I0127 14:04:20.632010 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a4a5-account-create-update-dzsqd" event={"ID":"9eec0612-69ee-4cf2-aa84-c08891b33e53","Type":"ContainerDied","Data":"6bb3d784aed651d5b2a423ec61dd80da1e5a52eb1bbd6a800fc03faec3c1e21c"} Jan 27 14:04:20 crc kubenswrapper[4914]: I0127 14:04:20.633676 4914 generic.go:334] "Generic (PLEG): container finished" podID="6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f" containerID="180c570bcb03998aab3c2ebc8726c7b0369cbc2f3a9166fc85060615ee89bda2" exitCode=0 Jan 27 14:04:20 crc kubenswrapper[4914]: I0127 14:04:20.633744 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vgx5c" event={"ID":"6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f","Type":"ContainerDied","Data":"180c570bcb03998aab3c2ebc8726c7b0369cbc2f3a9166fc85060615ee89bda2"} Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.174340 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18f0-account-create-update-v84s5" Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.181507 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-rq4q6" Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.327337 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jr9p\" (UniqueName: \"kubernetes.io/projected/593fca6b-0503-46d8-8b39-0b6fbf49c883-kube-api-access-8jr9p\") pod \"593fca6b-0503-46d8-8b39-0b6fbf49c883\" (UID: \"593fca6b-0503-46d8-8b39-0b6fbf49c883\") " Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.327439 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94e73cf8-51af-4781-be3c-ef7490061629-operator-scripts\") pod \"94e73cf8-51af-4781-be3c-ef7490061629\" (UID: \"94e73cf8-51af-4781-be3c-ef7490061629\") " Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.327481 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5zs2\" (UniqueName: \"kubernetes.io/projected/94e73cf8-51af-4781-be3c-ef7490061629-kube-api-access-w5zs2\") pod \"94e73cf8-51af-4781-be3c-ef7490061629\" (UID: \"94e73cf8-51af-4781-be3c-ef7490061629\") " Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.327594 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/593fca6b-0503-46d8-8b39-0b6fbf49c883-operator-scripts\") pod \"593fca6b-0503-46d8-8b39-0b6fbf49c883\" (UID: \"593fca6b-0503-46d8-8b39-0b6fbf49c883\") " Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.328421 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/593fca6b-0503-46d8-8b39-0b6fbf49c883-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "593fca6b-0503-46d8-8b39-0b6fbf49c883" (UID: "593fca6b-0503-46d8-8b39-0b6fbf49c883"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.328758 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e73cf8-51af-4781-be3c-ef7490061629-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94e73cf8-51af-4781-be3c-ef7490061629" (UID: "94e73cf8-51af-4781-be3c-ef7490061629"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.333110 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e73cf8-51af-4781-be3c-ef7490061629-kube-api-access-w5zs2" (OuterVolumeSpecName: "kube-api-access-w5zs2") pod "94e73cf8-51af-4781-be3c-ef7490061629" (UID: "94e73cf8-51af-4781-be3c-ef7490061629"). InnerVolumeSpecName "kube-api-access-w5zs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.334029 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593fca6b-0503-46d8-8b39-0b6fbf49c883-kube-api-access-8jr9p" (OuterVolumeSpecName: "kube-api-access-8jr9p") pod "593fca6b-0503-46d8-8b39-0b6fbf49c883" (UID: "593fca6b-0503-46d8-8b39-0b6fbf49c883"). InnerVolumeSpecName "kube-api-access-8jr9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.429880 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/593fca6b-0503-46d8-8b39-0b6fbf49c883-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.429935 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jr9p\" (UniqueName: \"kubernetes.io/projected/593fca6b-0503-46d8-8b39-0b6fbf49c883-kube-api-access-8jr9p\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.429949 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94e73cf8-51af-4781-be3c-ef7490061629-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.429965 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5zs2\" (UniqueName: \"kubernetes.io/projected/94e73cf8-51af-4781-be3c-ef7490061629-kube-api-access-w5zs2\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.642920 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18f0-account-create-update-v84s5" event={"ID":"94e73cf8-51af-4781-be3c-ef7490061629","Type":"ContainerDied","Data":"43cd4423d620145bef190b71a4e97b844272c5935f619de4a9acbc6fdbdf2c47"} Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.642956 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43cd4423d620145bef190b71a4e97b844272c5935f619de4a9acbc6fdbdf2c47" Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.642940 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-18f0-account-create-update-v84s5" Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.646450 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rq4q6" event={"ID":"593fca6b-0503-46d8-8b39-0b6fbf49c883","Type":"ContainerDied","Data":"7684198e701f08e022a9f43e5a47c5e2c4738d059daf736a36bb9f3aa90fb0d8"} Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.646470 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7684198e701f08e022a9f43e5a47c5e2c4738d059daf736a36bb9f3aa90fb0d8" Jan 27 14:04:21 crc kubenswrapper[4914]: I0127 14:04:21.646636 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rq4q6" Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.138093 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg" Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.215879 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-dxsh7"] Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.216115 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" podUID="66a27657-35f7-4e4e-a754-cb7baffffa74" containerName="dnsmasq-dns" containerID="cri-o://df2de16efa930b432c6618f4a734d0fbbf7dd15875b8d3ffe2a95a8c250e03c7" gracePeriod=10 Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.683364 4914 generic.go:334] "Generic (PLEG): container finished" podID="66a27657-35f7-4e4e-a754-cb7baffffa74" containerID="df2de16efa930b432c6618f4a734d0fbbf7dd15875b8d3ffe2a95a8c250e03c7" exitCode=0 Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.683511 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" 
event={"ID":"66a27657-35f7-4e4e-a754-cb7baffffa74","Type":"ContainerDied","Data":"df2de16efa930b432c6618f4a734d0fbbf7dd15875b8d3ffe2a95a8c250e03c7"} Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.686957 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2e5b-account-create-update-szp9d" event={"ID":"1a6452aa-f069-44f7-89ef-2766d721810d","Type":"ContainerDied","Data":"7fcf482f84992d89c856c5472e4db6893a9886e842c12382a4d49de62cb4eaa8"} Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.687010 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fcf482f84992d89c856c5472e4db6893a9886e842c12382a4d49de62cb4eaa8" Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.689225 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6bd6p" event={"ID":"722d139b-6c73-46cf-918b-6eec6bcee414","Type":"ContainerDied","Data":"c80dc3d213cf6df0e30a7b80520279f61485b9b934b58a842f7979c3215d7fda"} Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.689250 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c80dc3d213cf6df0e30a7b80520279f61485b9b934b58a842f7979c3215d7fda" Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.692348 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a4a5-account-create-update-dzsqd" event={"ID":"9eec0612-69ee-4cf2-aa84-c08891b33e53","Type":"ContainerDied","Data":"718885fcd36db39c2d3c9dff1df739da69d1578a0bec41ca5812cae1e48d7f74"} Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.692386 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="718885fcd36db39c2d3c9dff1df739da69d1578a0bec41ca5812cae1e48d7f74" Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.695497 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vgx5c" 
event={"ID":"6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f","Type":"ContainerDied","Data":"b9e380f0c1122da95477cea6a02007ada1ae2ee944f89485dcc31bb794cc19dd"} Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.695540 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9e380f0c1122da95477cea6a02007ada1ae2ee944f89485dcc31bb794cc19dd" Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.760446 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a4a5-account-create-update-dzsqd" Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.889399 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eec0612-69ee-4cf2-aa84-c08891b33e53-operator-scripts\") pod \"9eec0612-69ee-4cf2-aa84-c08891b33e53\" (UID: \"9eec0612-69ee-4cf2-aa84-c08891b33e53\") " Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.889892 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbhlg\" (UniqueName: \"kubernetes.io/projected/9eec0612-69ee-4cf2-aa84-c08891b33e53-kube-api-access-mbhlg\") pod \"9eec0612-69ee-4cf2-aa84-c08891b33e53\" (UID: \"9eec0612-69ee-4cf2-aa84-c08891b33e53\") " Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.889942 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eec0612-69ee-4cf2-aa84-c08891b33e53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9eec0612-69ee-4cf2-aa84-c08891b33e53" (UID: "9eec0612-69ee-4cf2-aa84-c08891b33e53"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.890327 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eec0612-69ee-4cf2-aa84-c08891b33e53-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.905283 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eec0612-69ee-4cf2-aa84-c08891b33e53-kube-api-access-mbhlg" (OuterVolumeSpecName: "kube-api-access-mbhlg") pod "9eec0612-69ee-4cf2-aa84-c08891b33e53" (UID: "9eec0612-69ee-4cf2-aa84-c08891b33e53"). InnerVolumeSpecName "kube-api-access-mbhlg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.911797 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6bd6p"
Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.927696 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vgx5c"
Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.971268 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2e5b-account-create-update-szp9d"
Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.991451 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj8lc\" (UniqueName: \"kubernetes.io/projected/722d139b-6c73-46cf-918b-6eec6bcee414-kube-api-access-xj8lc\") pod \"722d139b-6c73-46cf-918b-6eec6bcee414\" (UID: \"722d139b-6c73-46cf-918b-6eec6bcee414\") "
Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.991520 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722d139b-6c73-46cf-918b-6eec6bcee414-operator-scripts\") pod \"722d139b-6c73-46cf-918b-6eec6bcee414\" (UID: \"722d139b-6c73-46cf-918b-6eec6bcee414\") "
Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.991907 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbhlg\" (UniqueName: \"kubernetes.io/projected/9eec0612-69ee-4cf2-aa84-c08891b33e53-kube-api-access-mbhlg\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:24 crc kubenswrapper[4914]: I0127 14:04:24.992096 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/722d139b-6c73-46cf-918b-6eec6bcee414-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "722d139b-6c73-46cf-918b-6eec6bcee414" (UID: "722d139b-6c73-46cf-918b-6eec6bcee414"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.003981 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/722d139b-6c73-46cf-918b-6eec6bcee414-kube-api-access-xj8lc" (OuterVolumeSpecName: "kube-api-access-xj8lc") pod "722d139b-6c73-46cf-918b-6eec6bcee414" (UID: "722d139b-6c73-46cf-918b-6eec6bcee414"). InnerVolumeSpecName "kube-api-access-xj8lc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.057750 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7"
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.092789 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a6452aa-f069-44f7-89ef-2766d721810d-operator-scripts\") pod \"1a6452aa-f069-44f7-89ef-2766d721810d\" (UID: \"1a6452aa-f069-44f7-89ef-2766d721810d\") "
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.092934 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg7mn\" (UniqueName: \"kubernetes.io/projected/6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f-kube-api-access-lg7mn\") pod \"6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f\" (UID: \"6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f\") "
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.092990 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfmvv\" (UniqueName: \"kubernetes.io/projected/1a6452aa-f069-44f7-89ef-2766d721810d-kube-api-access-rfmvv\") pod \"1a6452aa-f069-44f7-89ef-2766d721810d\" (UID: \"1a6452aa-f069-44f7-89ef-2766d721810d\") "
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.093021 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f-operator-scripts\") pod \"6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f\" (UID: \"6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f\") "
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.093358 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj8lc\" (UniqueName: \"kubernetes.io/projected/722d139b-6c73-46cf-918b-6eec6bcee414-kube-api-access-xj8lc\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.093370 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722d139b-6c73-46cf-918b-6eec6bcee414-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.093363 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6452aa-f069-44f7-89ef-2766d721810d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a6452aa-f069-44f7-89ef-2766d721810d" (UID: "1a6452aa-f069-44f7-89ef-2766d721810d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.093649 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f" (UID: "6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.097626 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f-kube-api-access-lg7mn" (OuterVolumeSpecName: "kube-api-access-lg7mn") pod "6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f" (UID: "6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f"). InnerVolumeSpecName "kube-api-access-lg7mn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.097860 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a6452aa-f069-44f7-89ef-2766d721810d-kube-api-access-rfmvv" (OuterVolumeSpecName: "kube-api-access-rfmvv") pod "1a6452aa-f069-44f7-89ef-2766d721810d" (UID: "1a6452aa-f069-44f7-89ef-2766d721810d"). InnerVolumeSpecName "kube-api-access-rfmvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.194564 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f96ml\" (UniqueName: \"kubernetes.io/projected/66a27657-35f7-4e4e-a754-cb7baffffa74-kube-api-access-f96ml\") pod \"66a27657-35f7-4e4e-a754-cb7baffffa74\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") "
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.194668 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-ovsdbserver-sb\") pod \"66a27657-35f7-4e4e-a754-cb7baffffa74\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") "
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.194694 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-config\") pod \"66a27657-35f7-4e4e-a754-cb7baffffa74\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") "
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.194718 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-dns-svc\") pod \"66a27657-35f7-4e4e-a754-cb7baffffa74\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") "
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.194754 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-ovsdbserver-nb\") pod \"66a27657-35f7-4e4e-a754-cb7baffffa74\" (UID: \"66a27657-35f7-4e4e-a754-cb7baffffa74\") "
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.195196 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.195211 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a6452aa-f069-44f7-89ef-2766d721810d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.195223 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg7mn\" (UniqueName: \"kubernetes.io/projected/6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f-kube-api-access-lg7mn\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.195236 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfmvv\" (UniqueName: \"kubernetes.io/projected/1a6452aa-f069-44f7-89ef-2766d721810d-kube-api-access-rfmvv\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.198352 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a27657-35f7-4e4e-a754-cb7baffffa74-kube-api-access-f96ml" (OuterVolumeSpecName: "kube-api-access-f96ml") pod "66a27657-35f7-4e4e-a754-cb7baffffa74" (UID: "66a27657-35f7-4e4e-a754-cb7baffffa74"). InnerVolumeSpecName "kube-api-access-f96ml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.229112 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "66a27657-35f7-4e4e-a754-cb7baffffa74" (UID: "66a27657-35f7-4e4e-a754-cb7baffffa74"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.232394 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "66a27657-35f7-4e4e-a754-cb7baffffa74" (UID: "66a27657-35f7-4e4e-a754-cb7baffffa74"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.236412 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-config" (OuterVolumeSpecName: "config") pod "66a27657-35f7-4e4e-a754-cb7baffffa74" (UID: "66a27657-35f7-4e4e-a754-cb7baffffa74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.239340 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "66a27657-35f7-4e4e-a754-cb7baffffa74" (UID: "66a27657-35f7-4e4e-a754-cb7baffffa74"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.296662 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.296707 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f96ml\" (UniqueName: \"kubernetes.io/projected/66a27657-35f7-4e4e-a754-cb7baffffa74-kube-api-access-f96ml\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.296717 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.296725 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.296734 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66a27657-35f7-4e4e-a754-cb7baffffa74-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.703971 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5bzqr" event={"ID":"259bfb44-0b45-476a-901a-e70c6b05a0e4","Type":"ContainerStarted","Data":"e43478a5fba67162a32a4753bf8dd61deb2192806ae6302438aff3f8bfde3711"}
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.706345 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2e5b-account-create-update-szp9d"
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.714172 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a4a5-account-create-update-dzsqd"
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.714947 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vgx5c"
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.715078 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6bd6p"
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.716501 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" event={"ID":"66a27657-35f7-4e4e-a754-cb7baffffa74","Type":"ContainerDied","Data":"1c1a6a674c0cf82a8838f94c6e723d223407ffdf20303a211f9dd71dc9b05197"}
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.716563 4914 scope.go:117] "RemoveContainer" containerID="df2de16efa930b432c6618f4a734d0fbbf7dd15875b8d3ffe2a95a8c250e03c7"
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.716576 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7"
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.730849 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-5bzqr" podStartSLOduration=2.283490779 podStartE2EDuration="7.730814817s" podCreationTimestamp="2026-01-27 14:04:18 +0000 UTC" firstStartedPulling="2026-01-27 14:04:19.303323717 +0000 UTC m=+1217.615673802" lastFinishedPulling="2026-01-27 14:04:24.750647755 +0000 UTC m=+1223.062997840" observedRunningTime="2026-01-27 14:04:25.730740355 +0000 UTC m=+1224.043090450" watchObservedRunningTime="2026-01-27 14:04:25.730814817 +0000 UTC m=+1224.043164902"
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.736423 4914 scope.go:117] "RemoveContainer" containerID="caf4749ab42d2a3e53332b78e653e5bab8ed25a3d187aa061cf8b84c537f55cc"
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.802942 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-dxsh7"]
Jan 27 14:04:25 crc kubenswrapper[4914]: I0127 14:04:25.819698 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-dxsh7"]
Jan 27 14:04:26 crc kubenswrapper[4914]: I0127 14:04:26.307550 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a27657-35f7-4e4e-a754-cb7baffffa74" path="/var/lib/kubelet/pods/66a27657-35f7-4e4e-a754-cb7baffffa74/volumes"
Jan 27 14:04:28 crc kubenswrapper[4914]: I0127 14:04:28.733031 4914 generic.go:334] "Generic (PLEG): container finished" podID="259bfb44-0b45-476a-901a-e70c6b05a0e4" containerID="e43478a5fba67162a32a4753bf8dd61deb2192806ae6302438aff3f8bfde3711" exitCode=0
Jan 27 14:04:28 crc kubenswrapper[4914]: I0127 14:04:28.733123 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5bzqr" event={"ID":"259bfb44-0b45-476a-901a-e70c6b05a0e4","Type":"ContainerDied","Data":"e43478a5fba67162a32a4753bf8dd61deb2192806ae6302438aff3f8bfde3711"}
Jan 27 14:04:29 crc kubenswrapper[4914]: I0127 14:04:29.853727 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-dxsh7" podUID="66a27657-35f7-4e4e-a754-cb7baffffa74" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout"
Jan 27 14:04:30 crc kubenswrapper[4914]: I0127 14:04:30.063402 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5bzqr"
Jan 27 14:04:30 crc kubenswrapper[4914]: I0127 14:04:30.181434 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/259bfb44-0b45-476a-901a-e70c6b05a0e4-config-data\") pod \"259bfb44-0b45-476a-901a-e70c6b05a0e4\" (UID: \"259bfb44-0b45-476a-901a-e70c6b05a0e4\") "
Jan 27 14:04:30 crc kubenswrapper[4914]: I0127 14:04:30.181488 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/259bfb44-0b45-476a-901a-e70c6b05a0e4-combined-ca-bundle\") pod \"259bfb44-0b45-476a-901a-e70c6b05a0e4\" (UID: \"259bfb44-0b45-476a-901a-e70c6b05a0e4\") "
Jan 27 14:04:30 crc kubenswrapper[4914]: I0127 14:04:30.181557 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slwc5\" (UniqueName: \"kubernetes.io/projected/259bfb44-0b45-476a-901a-e70c6b05a0e4-kube-api-access-slwc5\") pod \"259bfb44-0b45-476a-901a-e70c6b05a0e4\" (UID: \"259bfb44-0b45-476a-901a-e70c6b05a0e4\") "
Jan 27 14:04:30 crc kubenswrapper[4914]: I0127 14:04:30.187341 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/259bfb44-0b45-476a-901a-e70c6b05a0e4-kube-api-access-slwc5" (OuterVolumeSpecName: "kube-api-access-slwc5") pod "259bfb44-0b45-476a-901a-e70c6b05a0e4" (UID: "259bfb44-0b45-476a-901a-e70c6b05a0e4"). InnerVolumeSpecName "kube-api-access-slwc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:04:30 crc kubenswrapper[4914]: I0127 14:04:30.210273 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/259bfb44-0b45-476a-901a-e70c6b05a0e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "259bfb44-0b45-476a-901a-e70c6b05a0e4" (UID: "259bfb44-0b45-476a-901a-e70c6b05a0e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:30 crc kubenswrapper[4914]: I0127 14:04:30.234051 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/259bfb44-0b45-476a-901a-e70c6b05a0e4-config-data" (OuterVolumeSpecName: "config-data") pod "259bfb44-0b45-476a-901a-e70c6b05a0e4" (UID: "259bfb44-0b45-476a-901a-e70c6b05a0e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:30 crc kubenswrapper[4914]: I0127 14:04:30.283328 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/259bfb44-0b45-476a-901a-e70c6b05a0e4-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:30 crc kubenswrapper[4914]: I0127 14:04:30.283572 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/259bfb44-0b45-476a-901a-e70c6b05a0e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:30 crc kubenswrapper[4914]: I0127 14:04:30.283678 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slwc5\" (UniqueName: \"kubernetes.io/projected/259bfb44-0b45-476a-901a-e70c6b05a0e4-kube-api-access-slwc5\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:30 crc kubenswrapper[4914]: I0127 14:04:30.750600 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5bzqr" event={"ID":"259bfb44-0b45-476a-901a-e70c6b05a0e4","Type":"ContainerDied","Data":"4dbad6e9b3b40ead54c7da7caadd016d0ec3c1d35d273182c6095fe813b34c46"}
Jan 27 14:04:30 crc kubenswrapper[4914]: I0127 14:04:30.750639 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dbad6e9b3b40ead54c7da7caadd016d0ec3c1d35d273182c6095fe813b34c46"
Jan 27 14:04:30 crc kubenswrapper[4914]: I0127 14:04:30.750693 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5bzqr"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.029719 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-dvj2m"]
Jan 27 14:04:31 crc kubenswrapper[4914]: E0127 14:04:31.030226 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a6452aa-f069-44f7-89ef-2766d721810d" containerName="mariadb-account-create-update"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030245 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a6452aa-f069-44f7-89ef-2766d721810d" containerName="mariadb-account-create-update"
Jan 27 14:04:31 crc kubenswrapper[4914]: E0127 14:04:31.030263 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e73cf8-51af-4781-be3c-ef7490061629" containerName="mariadb-account-create-update"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030273 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e73cf8-51af-4781-be3c-ef7490061629" containerName="mariadb-account-create-update"
Jan 27 14:04:31 crc kubenswrapper[4914]: E0127 14:04:31.030294 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a27657-35f7-4e4e-a754-cb7baffffa74" containerName="dnsmasq-dns"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030302 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a27657-35f7-4e4e-a754-cb7baffffa74" containerName="dnsmasq-dns"
Jan 27 14:04:31 crc kubenswrapper[4914]: E0127 14:04:31.030315 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f" containerName="mariadb-database-create"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030323 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f" containerName="mariadb-database-create"
Jan 27 14:04:31 crc kubenswrapper[4914]: E0127 14:04:31.030342 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593fca6b-0503-46d8-8b39-0b6fbf49c883" containerName="mariadb-database-create"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030350 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="593fca6b-0503-46d8-8b39-0b6fbf49c883" containerName="mariadb-database-create"
Jan 27 14:04:31 crc kubenswrapper[4914]: E0127 14:04:31.030367 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eec0612-69ee-4cf2-aa84-c08891b33e53" containerName="mariadb-account-create-update"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030376 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eec0612-69ee-4cf2-aa84-c08891b33e53" containerName="mariadb-account-create-update"
Jan 27 14:04:31 crc kubenswrapper[4914]: E0127 14:04:31.030391 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a27657-35f7-4e4e-a754-cb7baffffa74" containerName="init"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030398 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a27657-35f7-4e4e-a754-cb7baffffa74" containerName="init"
Jan 27 14:04:31 crc kubenswrapper[4914]: E0127 14:04:31.030417 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="722d139b-6c73-46cf-918b-6eec6bcee414" containerName="mariadb-database-create"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030426 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="722d139b-6c73-46cf-918b-6eec6bcee414" containerName="mariadb-database-create"
Jan 27 14:04:31 crc kubenswrapper[4914]: E0127 14:04:31.030434 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="259bfb44-0b45-476a-901a-e70c6b05a0e4" containerName="keystone-db-sync"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030442 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="259bfb44-0b45-476a-901a-e70c6b05a0e4" containerName="keystone-db-sync"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030664 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="722d139b-6c73-46cf-918b-6eec6bcee414" containerName="mariadb-database-create"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030683 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f" containerName="mariadb-database-create"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030698 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="259bfb44-0b45-476a-901a-e70c6b05a0e4" containerName="keystone-db-sync"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030708 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a6452aa-f069-44f7-89ef-2766d721810d" containerName="mariadb-account-create-update"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030722 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="94e73cf8-51af-4781-be3c-ef7490061629" containerName="mariadb-account-create-update"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030737 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eec0612-69ee-4cf2-aa84-c08891b33e53" containerName="mariadb-account-create-update"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030749 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a27657-35f7-4e4e-a754-cb7baffffa74" containerName="dnsmasq-dns"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.030784 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="593fca6b-0503-46d8-8b39-0b6fbf49c883" containerName="mariadb-database-create"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.037200 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.044275 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-d266k"]
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.045323 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.051261 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.051875 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.051955 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.052224 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-84xg2"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.052470 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.058485 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d266k"]
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.071585 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-dvj2m"]
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.197459 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.197667 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-combined-ca-bundle\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.197710 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-scripts\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.197736 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxjzc\" (UniqueName: \"kubernetes.io/projected/e855aa58-430b-40b4-a5f1-d8abca86976f-kube-api-access-dxjzc\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.197755 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-config-data\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.197789 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-config\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.197816 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-credential-keys\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.197864 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v6mz\" (UniqueName: \"kubernetes.io/projected/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-kube-api-access-4v6mz\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.197889 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.197912 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.197931 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.197956 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-fernet-keys\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.226277 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-59bf577cbf-wxzn9"]
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.227750 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59bf577cbf-wxzn9"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.241746 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.241993 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.242174 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.242331 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-bqh7w"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.279783 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59bf577cbf-wxzn9"]
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299357 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-scripts\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299403 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxjzc\" (UniqueName: \"kubernetes.io/projected/e855aa58-430b-40b4-a5f1-d8abca86976f-kube-api-access-dxjzc\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299423 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-config-data\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299462 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-config\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299489 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-config-data\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299510 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-credential-keys\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299526 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v6mz\" (UniqueName: \"kubernetes.io/projected/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-kube-api-access-4v6mz\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299548 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299566 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299586 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299605 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-fernet-keys\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299638 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName:
\"kubernetes.io/empty-dir/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-logs\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299661 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpgnt\" (UniqueName: \"kubernetes.io/projected/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-kube-api-access-hpgnt\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299680 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-horizon-secret-key\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299700 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-scripts\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299723 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.299739 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-combined-ca-bundle\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.302601 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.304597 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-config\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.305254 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.305737 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.307822 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" 
(UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.324886 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-config-data\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.324956 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.328632 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.331343 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.331509 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.339498 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-combined-ca-bundle\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.340174 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-scripts\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.342495 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-fernet-keys\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.347435 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.355499 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxjzc\" (UniqueName: \"kubernetes.io/projected/e855aa58-430b-40b4-a5f1-d8abca86976f-kube-api-access-dxjzc\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.363971 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v6mz\" (UniqueName: \"kubernetes.io/projected/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-kube-api-access-4v6mz\") pod \"dnsmasq-dns-54b4bb76d5-dvj2m\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") " pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.367899 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.368401 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-credential-keys\") pod \"keystone-bootstrap-d266k\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") " pod="openstack/keystone-bootstrap-d266k" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.402761 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.402846 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrldw\" (UniqueName: \"kubernetes.io/projected/ec53709e-df2b-4fc9-b9ac-6e144a262455-kube-api-access-mrldw\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.402890 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.402921 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-logs\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 
14:04:31.402950 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpgnt\" (UniqueName: \"kubernetes.io/projected/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-kube-api-access-hpgnt\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.402975 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec53709e-df2b-4fc9-b9ac-6e144a262455-run-httpd\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.403001 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-horizon-secret-key\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.403021 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec53709e-df2b-4fc9-b9ac-6e144a262455-log-httpd\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.403051 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-scripts\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.403079 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-config-data\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.403142 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-config-data\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.403170 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-scripts\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.403645 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-logs\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.407128 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-scripts\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.408303 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-config-data\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " 
pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.418447 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-horizon-secret-key\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.504925 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.504976 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrldw\" (UniqueName: \"kubernetes.io/projected/ec53709e-df2b-4fc9-b9ac-6e144a262455-kube-api-access-mrldw\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.505009 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.505043 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec53709e-df2b-4fc9-b9ac-6e144a262455-run-httpd\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.505058 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec53709e-df2b-4fc9-b9ac-6e144a262455-log-httpd\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.505088 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-config-data\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.505151 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-scripts\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.508344 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpgnt\" (UniqueName: \"kubernetes.io/projected/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-kube-api-access-hpgnt\") pod \"horizon-59bf577cbf-wxzn9\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.510708 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec53709e-df2b-4fc9-b9ac-6e144a262455-log-httpd\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.510950 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc 
kubenswrapper[4914]: I0127 14:04:31.510964 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec53709e-df2b-4fc9-b9ac-6e144a262455-run-httpd\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.516362 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-config-data\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.522578 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.527544 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-scripts\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.531209 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-dvj2m"] Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.553413 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.565839 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2h5wx"] Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.573583 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2h5wx" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.594342 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4kzf2" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.594536 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.594714 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.606503 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrldw\" (UniqueName: \"kubernetes.io/projected/ec53709e-df2b-4fc9-b9ac-6e144a262455-kube-api-access-mrldw\") pod \"ceilometer-0\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.608569 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.609061 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2h5wx"] Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.639379 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-cwqrf"] Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.641351 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cwqrf" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.664432 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-8v69b" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.665594 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.671691 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d266k" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.737547 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgqr2\" (UniqueName: \"kubernetes.io/projected/07d55233-43ac-42a0-b604-e38f7bafa346-kube-api-access-vgqr2\") pod \"barbican-db-sync-cwqrf\" (UID: \"07d55233-43ac-42a0-b604-e38f7bafa346\") " pod="openstack/barbican-db-sync-cwqrf" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.737626 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-config-data\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.737709 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d55233-43ac-42a0-b604-e38f7bafa346-combined-ca-bundle\") pod \"barbican-db-sync-cwqrf\" (UID: \"07d55233-43ac-42a0-b604-e38f7bafa346\") " pod="openstack/barbican-db-sync-cwqrf" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.737768 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-combined-ca-bundle\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.737885 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07d55233-43ac-42a0-b604-e38f7bafa346-db-sync-config-data\") pod \"barbican-db-sync-cwqrf\" (UID: \"07d55233-43ac-42a0-b604-e38f7bafa346\") " pod="openstack/barbican-db-sync-cwqrf" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.737941 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505474ad-b983-4001-b8b6-f55b1d077e08-logs\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.737978 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-scripts\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.737999 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk7xx\" (UniqueName: \"kubernetes.io/projected/505474ad-b983-4001-b8b6-f55b1d077e08-kube-api-access-qk7xx\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx" Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.739008 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"] Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 
14:04:31.766117 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.795379 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-58d6b45967-kdfwt"]
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874082 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-config\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874166 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874188 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgqr2\" (UniqueName: \"kubernetes.io/projected/07d55233-43ac-42a0-b604-e38f7bafa346-kube-api-access-vgqr2\") pod \"barbican-db-sync-cwqrf\" (UID: \"07d55233-43ac-42a0-b604-e38f7bafa346\") " pod="openstack/barbican-db-sync-cwqrf"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874214 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-config-data\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874253 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dzdf\" (UniqueName: \"kubernetes.io/projected/972edb24-1cfb-4529-bc89-2bb9a89c5579-kube-api-access-4dzdf\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874272 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d55233-43ac-42a0-b604-e38f7bafa346-combined-ca-bundle\") pod \"barbican-db-sync-cwqrf\" (UID: \"07d55233-43ac-42a0-b604-e38f7bafa346\") " pod="openstack/barbican-db-sync-cwqrf"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874301 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-combined-ca-bundle\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874332 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874354 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07d55233-43ac-42a0-b604-e38f7bafa346-db-sync-config-data\") pod \"barbican-db-sync-cwqrf\" (UID: \"07d55233-43ac-42a0-b604-e38f7bafa346\") " pod="openstack/barbican-db-sync-cwqrf"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874380 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505474ad-b983-4001-b8b6-f55b1d077e08-logs\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874400 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874422 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-scripts\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874442 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.874468 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk7xx\" (UniqueName: \"kubernetes.io/projected/505474ad-b983-4001-b8b6-f55b1d077e08-kube-api-access-qk7xx\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.876640 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.885134 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505474ad-b983-4001-b8b6-f55b1d077e08-logs\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.898255 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07d55233-43ac-42a0-b604-e38f7bafa346-db-sync-config-data\") pod \"barbican-db-sync-cwqrf\" (UID: \"07d55233-43ac-42a0-b604-e38f7bafa346\") " pod="openstack/barbican-db-sync-cwqrf"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.914424 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cwqrf"]
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.936991 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-combined-ca-bundle\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.937324 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-scripts\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.952189 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-config-data\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.952629 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d55233-43ac-42a0-b604-e38f7bafa346-combined-ca-bundle\") pod \"barbican-db-sync-cwqrf\" (UID: \"07d55233-43ac-42a0-b604-e38f7bafa346\") " pod="openstack/barbican-db-sync-cwqrf"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.953406 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk7xx\" (UniqueName: \"kubernetes.io/projected/505474ad-b983-4001-b8b6-f55b1d077e08-kube-api-access-qk7xx\") pod \"placement-db-sync-2h5wx\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " pod="openstack/placement-db-sync-2h5wx"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.955236 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgqr2\" (UniqueName: \"kubernetes.io/projected/07d55233-43ac-42a0-b604-e38f7bafa346-kube-api-access-vgqr2\") pod \"barbican-db-sync-cwqrf\" (UID: \"07d55233-43ac-42a0-b604-e38f7bafa346\") " pod="openstack/barbican-db-sync-cwqrf"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.979677 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dzdf\" (UniqueName: \"kubernetes.io/projected/972edb24-1cfb-4529-bc89-2bb9a89c5579-kube-api-access-4dzdf\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.979771 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.979812 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.979845 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.979886 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-config\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.979933 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.980336 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-t8d7p"]
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.980783 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.989754 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.990344 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-config\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.991181 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t8d7p"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.993627 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 27 14:04:31 crc kubenswrapper[4914]: I0127 14:04:31.995122 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.000381 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.001539 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-mr4pf"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.001778 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.022899 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-t8d7p"]
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.034712 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dzdf\" (UniqueName: \"kubernetes.io/projected/972edb24-1cfb-4529-bc89-2bb9a89c5579-kube-api-access-4dzdf\") pod \"dnsmasq-dns-5dc4fcdbc-fbv2k\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.046928 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"]
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.047349 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cwqrf"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.065496 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58d6b45967-kdfwt"]
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.085561 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghtwg\" (UniqueName: \"kubernetes.io/projected/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-kube-api-access-ghtwg\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.085638 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ftd6\" (UniqueName: \"kubernetes.io/projected/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-kube-api-access-4ftd6\") pod \"neutron-db-sync-t8d7p\" (UID: \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\") " pod="openstack/neutron-db-sync-t8d7p"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.085675 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-combined-ca-bundle\") pod \"neutron-db-sync-t8d7p\" (UID: \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\") " pod="openstack/neutron-db-sync-t8d7p"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.085735 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-scripts\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.085774 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-config\") pod \"neutron-db-sync-t8d7p\" (UID: \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\") " pod="openstack/neutron-db-sync-t8d7p"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.085795 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-horizon-secret-key\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.085886 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-config-data\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.085928 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-logs\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.086232 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.089006 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.091667 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.092088 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.092200 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.092373 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h2zsk"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.136011 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.170523 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.172434 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.178236 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.178417 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.180523 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.187301 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-combined-ca-bundle\") pod \"neutron-db-sync-t8d7p\" (UID: \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\") " pod="openstack/neutron-db-sync-t8d7p"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.187361 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-scripts\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.187393 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-config\") pod \"neutron-db-sync-t8d7p\" (UID: \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\") " pod="openstack/neutron-db-sync-t8d7p"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.187409 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-horizon-secret-key\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.187445 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-config-data\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.188760 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-logs\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.188809 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghtwg\" (UniqueName: \"kubernetes.io/projected/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-kube-api-access-ghtwg\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.188864 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ftd6\" (UniqueName: \"kubernetes.io/projected/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-kube-api-access-4ftd6\") pod \"neutron-db-sync-t8d7p\" (UID: \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\") " pod="openstack/neutron-db-sync-t8d7p"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.191063 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-logs\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.191930 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-scripts\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.192462 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-config-data\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.194207 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-horizon-secret-key\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.194261 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-combined-ca-bundle\") pod \"neutron-db-sync-t8d7p\" (UID: \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\") " pod="openstack/neutron-db-sync-t8d7p"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.201394 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-config\") pod \"neutron-db-sync-t8d7p\" (UID: \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\") " pod="openstack/neutron-db-sync-t8d7p"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.210457 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghtwg\" (UniqueName: \"kubernetes.io/projected/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-kube-api-access-ghtwg\") pod \"horizon-58d6b45967-kdfwt\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.212089 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ftd6\" (UniqueName: \"kubernetes.io/projected/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-kube-api-access-4ftd6\") pod \"neutron-db-sync-t8d7p\" (UID: \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\") " pod="openstack/neutron-db-sync-t8d7p"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.228877 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2h5wx"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.253397 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.271936 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58d6b45967-kdfwt"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.293617 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87mnd\" (UniqueName: \"kubernetes.io/projected/231e853e-1db9-4828-8494-424bedf8b7bf-kube-api-access-87mnd\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.293699 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.293739 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28v94\" (UniqueName: \"kubernetes.io/projected/516e350a-2ee5-47ab-a4de-395688e55039-kube-api-access-28v94\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.293776 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/516e350a-2ee5-47ab-a4de-395688e55039-logs\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.293802 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.301962 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.302221 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/231e853e-1db9-4828-8494-424bedf8b7bf-logs\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.302376 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/231e853e-1db9-4828-8494-424bedf8b7bf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.302423 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-config-data\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.310770 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/516e350a-2ee5-47ab-a4de-395688e55039-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.311199 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.311292 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.311322 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.311355 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.311379 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-scripts\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:32 crc kubenswrapper[4914]: I0127 14:04:32.311625 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.369210 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-dvj2m"]
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.410718 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t8d7p"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.412817 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.412878 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.412944 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.412968 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-scripts\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.412987 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.413042 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87mnd\" (UniqueName: \"kubernetes.io/projected/231e853e-1db9-4828-8494-424bedf8b7bf-kube-api-access-87mnd\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.413065 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.413108 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28v94\" (UniqueName: \"kubernetes.io/projected/516e350a-2ee5-47ab-a4de-395688e55039-kube-api-access-28v94\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.413143 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/516e350a-2ee5-47ab-a4de-395688e55039-logs\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.413176 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.413202 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.413231 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/231e853e-1db9-4828-8494-424bedf8b7bf-logs\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.413301 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/231e853e-1db9-4828-8494-424bedf8b7bf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.413380 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-config-data\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.413446 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/516e350a-2ee5-47ab-a4de-395688e55039-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.413490 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.413893 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.423216 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/231e853e-1db9-4828-8494-424bedf8b7bf-logs\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.423488 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\"
(UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.423616 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/516e350a-2ee5-47ab-a4de-395688e55039-logs\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.425587 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/231e853e-1db9-4828-8494-424bedf8b7bf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.426641 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/516e350a-2ee5-47ab-a4de-395688e55039-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.427778 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.465998 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-combined-ca-bundle\") 
pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.468759 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.474024 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-scripts\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.476641 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.483998 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.492246 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.492780 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-config-data\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.493629 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28v94\" (UniqueName: \"kubernetes.io/projected/516e350a-2ee5-47ab-a4de-395688e55039-kube-api-access-28v94\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.493925 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87mnd\" (UniqueName: \"kubernetes.io/projected/231e853e-1db9-4828-8494-424bedf8b7bf-kube-api-access-87mnd\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.498499 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.511270 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.548960 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.655646 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.666209 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d266k"] Jan 27 14:04:33 crc kubenswrapper[4914]: W0127 14:04:32.688687 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode855aa58_430b_40b4_a5f1_d8abca86976f.slice/crio-32db439bdb64d0294c9d80aa2b5d1ab91ef3686b25ee855aa5ea2cf56f7cedf8 WatchSource:0}: Error finding container 32db439bdb64d0294c9d80aa2b5d1ab91ef3686b25ee855aa5ea2cf56f7cedf8: Status 404 returned error can't find the container with id 32db439bdb64d0294c9d80aa2b5d1ab91ef3686b25ee855aa5ea2cf56f7cedf8 Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.751757 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.894620 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec53709e-df2b-4fc9-b9ac-6e144a262455","Type":"ContainerStarted","Data":"9cfb0d219fc3bc6aebcdb73c7b314e6b7d5698e507f87519359a9c4607cd8d0d"} Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.895663 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d266k" event={"ID":"e855aa58-430b-40b4-a5f1-d8abca86976f","Type":"ContainerStarted","Data":"32db439bdb64d0294c9d80aa2b5d1ab91ef3686b25ee855aa5ea2cf56f7cedf8"} Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.900307 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m" event={"ID":"09dfaf0f-e3cb-43e7-8e82-e060d5e38767","Type":"ContainerStarted","Data":"4e0ba9ffd674d35b874f0b5ef369f082eaa21a8f751875c0fc8029009ff08836"} Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:32.900331 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m" event={"ID":"09dfaf0f-e3cb-43e7-8e82-e060d5e38767","Type":"ContainerStarted","Data":"19325beac5fbef5eed8f072b43ef8701607a837189fee51ac1a6a6a42157c012"} Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.432085 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.494431 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58d6b45967-kdfwt"] Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.506497 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59bf577cbf-wxzn9"] Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.531022 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cwqrf"] Jan 27 14:04:33 crc kubenswrapper[4914]: 
I0127 14:04:33.558644 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-596b6b8cf5-tz8m9"] Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.560476 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.577170 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.600665 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-596b6b8cf5-tz8m9"] Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.651922 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b5tl\" (UniqueName: \"kubernetes.io/projected/047e2f42-86db-4fb3-ba7d-6927181dc49b-kube-api-access-7b5tl\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.652701 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/047e2f42-86db-4fb3-ba7d-6927181dc49b-config-data\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.652789 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/047e2f42-86db-4fb3-ba7d-6927181dc49b-horizon-secret-key\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.652902 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/047e2f42-86db-4fb3-ba7d-6927181dc49b-scripts\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.653044 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/047e2f42-86db-4fb3-ba7d-6927181dc49b-logs\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.653207 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2h5wx"] Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.735032 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.754482 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b5tl\" (UniqueName: \"kubernetes.io/projected/047e2f42-86db-4fb3-ba7d-6927181dc49b-kube-api-access-7b5tl\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.754545 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/047e2f42-86db-4fb3-ba7d-6927181dc49b-config-data\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.754593 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/047e2f42-86db-4fb3-ba7d-6927181dc49b-horizon-secret-key\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: 
\"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.754635 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/047e2f42-86db-4fb3-ba7d-6927181dc49b-scripts\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.754690 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/047e2f42-86db-4fb3-ba7d-6927181dc49b-logs\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.755174 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/047e2f42-86db-4fb3-ba7d-6927181dc49b-logs\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.756016 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/047e2f42-86db-4fb3-ba7d-6927181dc49b-scripts\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.762367 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/047e2f42-86db-4fb3-ba7d-6927181dc49b-config-data\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.781777 4914 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7b5tl\" (UniqueName: \"kubernetes.io/projected/047e2f42-86db-4fb3-ba7d-6927181dc49b-kube-api-access-7b5tl\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.786488 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/047e2f42-86db-4fb3-ba7d-6927181dc49b-horizon-secret-key\") pod \"horizon-596b6b8cf5-tz8m9\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") " pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.806473 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-8d56k"] Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.807907 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.810330 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.810569 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.811113 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-flhhv" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.835782 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8d56k"] Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.939237 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-596b6b8cf5-tz8m9" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.940679 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-t8d7p"] Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.961564 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-scripts\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.961645 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-config-data\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.961671 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-combined-ca-bundle\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.961722 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/131bae56-5108-4750-8056-68133598a109-etc-machine-id\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.961796 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-db-sync-config-data\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.961972 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjpzw\" (UniqueName: \"kubernetes.io/projected/131bae56-5108-4750-8056-68133598a109-kube-api-access-bjpzw\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:33 crc kubenswrapper[4914]: I0127 14:04:33.986955 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2h5wx" event={"ID":"505474ad-b983-4001-b8b6-f55b1d077e08","Type":"ContainerStarted","Data":"7ddeadb13df51973c038fad3e1c0dff60ea0d5ae7bec3a55a214d297abfbec23"} Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.039301 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d266k" event={"ID":"e855aa58-430b-40b4-a5f1-d8abca86976f","Type":"ContainerStarted","Data":"e2bc8afa15b9715d930f1d631bac4f165a1567fa763f778fd96f02ded1d467f1"} Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.054168 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58d6b45967-kdfwt"] Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.069923 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-db-sync-config-data\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.069987 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjpzw\" (UniqueName: 
\"kubernetes.io/projected/131bae56-5108-4750-8056-68133598a109-kube-api-access-bjpzw\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.070053 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-scripts\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.070100 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-config-data\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.070118 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-combined-ca-bundle\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.070156 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/131bae56-5108-4750-8056-68133598a109-etc-machine-id\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.070261 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/131bae56-5108-4750-8056-68133598a109-etc-machine-id\") pod \"cinder-db-sync-8d56k\" (UID: 
\"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.083426 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-scripts\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.087165 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-db-sync-config-data\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.087958 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-combined-ca-bundle\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.102903 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"] Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.119591 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjpzw\" (UniqueName: \"kubernetes.io/projected/131bae56-5108-4750-8056-68133598a109-kube-api-access-bjpzw\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.127748 4914 generic.go:334] "Generic (PLEG): container finished" podID="09dfaf0f-e3cb-43e7-8e82-e060d5e38767" containerID="4e0ba9ffd674d35b874f0b5ef369f082eaa21a8f751875c0fc8029009ff08836" exitCode=0 Jan 27 14:04:34 crc 
kubenswrapper[4914]: I0127 14:04:34.127881 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m" event={"ID":"09dfaf0f-e3cb-43e7-8e82-e060d5e38767","Type":"ContainerDied","Data":"4e0ba9ffd674d35b874f0b5ef369f082eaa21a8f751875c0fc8029009ff08836"} Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.128804 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-config-data\") pod \"cinder-db-sync-8d56k\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " pod="openstack/cinder-db-sync-8d56k" Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.128852 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-d266k" podStartSLOduration=3.128810622 podStartE2EDuration="3.128810622s" podCreationTimestamp="2026-01-27 14:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:04:34.083963164 +0000 UTC m=+1232.396313249" watchObservedRunningTime="2026-01-27 14:04:34.128810622 +0000 UTC m=+1232.441160697" Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.144468 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cwqrf" event={"ID":"07d55233-43ac-42a0-b604-e38f7bafa346","Type":"ContainerStarted","Data":"9a3bae91264cb032f1929f663a7cbf9c566396d93ff8948d2874ee44dd9fd39f"} Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.153171 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59bf577cbf-wxzn9" event={"ID":"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c","Type":"ContainerStarted","Data":"7e5c125575924aef91061a76859301cf6dd00f4bd0837b741de1ec6b9f4a1c58"} Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.159674 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8d56k"
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.333628 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.618191 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-596b6b8cf5-tz8m9"]
Jan 27 14:04:34 crc kubenswrapper[4914]: W0127 14:04:34.649707 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod047e2f42_86db_4fb3_ba7d_6927181dc49b.slice/crio-4848d6f5fd42b50dd2f7d06cb372bb33ceeb12492d11dd8e9a57c3fa10c3df02 WatchSource:0}: Error finding container 4848d6f5fd42b50dd2f7d06cb372bb33ceeb12492d11dd8e9a57c3fa10c3df02: Status 404 returned error can't find the container with id 4848d6f5fd42b50dd2f7d06cb372bb33ceeb12492d11dd8e9a57c3fa10c3df02
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.712914 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.904722 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-dns-svc\") pod \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") "
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.904943 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-ovsdbserver-nb\") pod \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") "
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.905012 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v6mz\" (UniqueName: \"kubernetes.io/projected/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-kube-api-access-4v6mz\") pod \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") "
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.905059 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-ovsdbserver-sb\") pod \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") "
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.905164 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-config\") pod \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") "
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.905195 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-dns-swift-storage-0\") pod \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\" (UID: \"09dfaf0f-e3cb-43e7-8e82-e060d5e38767\") "
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.911377 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8d56k"]
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.913507 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-kube-api-access-4v6mz" (OuterVolumeSpecName: "kube-api-access-4v6mz") pod "09dfaf0f-e3cb-43e7-8e82-e060d5e38767" (UID: "09dfaf0f-e3cb-43e7-8e82-e060d5e38767"). InnerVolumeSpecName "kube-api-access-4v6mz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.942293 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "09dfaf0f-e3cb-43e7-8e82-e060d5e38767" (UID: "09dfaf0f-e3cb-43e7-8e82-e060d5e38767"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.950748 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09dfaf0f-e3cb-43e7-8e82-e060d5e38767" (UID: "09dfaf0f-e3cb-43e7-8e82-e060d5e38767"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.950968 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09dfaf0f-e3cb-43e7-8e82-e060d5e38767" (UID: "09dfaf0f-e3cb-43e7-8e82-e060d5e38767"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.959065 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-config" (OuterVolumeSpecName: "config") pod "09dfaf0f-e3cb-43e7-8e82-e060d5e38767" (UID: "09dfaf0f-e3cb-43e7-8e82-e060d5e38767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:34 crc kubenswrapper[4914]: I0127 14:04:34.970075 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09dfaf0f-e3cb-43e7-8e82-e060d5e38767" (UID: "09dfaf0f-e3cb-43e7-8e82-e060d5e38767"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.008602 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.008647 4914 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.008662 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.008672 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.008684 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v6mz\" (UniqueName: \"kubernetes.io/projected/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-kube-api-access-4v6mz\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.008695 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09dfaf0f-e3cb-43e7-8e82-e060d5e38767-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.069602 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.177224 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"516e350a-2ee5-47ab-a4de-395688e55039","Type":"ContainerStarted","Data":"9f81b88ea719990308989e8b199fd1ada452f3f8c397afcc2e6904b1cb76e0b4"}
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.179558 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t8d7p" event={"ID":"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1","Type":"ContainerStarted","Data":"cebd09a8c949395b3a9f23c0b61f4017ca5fbe61ee1d3120edfef929567ede6e"}
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.179601 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t8d7p" event={"ID":"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1","Type":"ContainerStarted","Data":"cc3edc9f9d649c31d23191adad989afd6914eb775ef0f941131d56aa66561b6a"}
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.183973 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"231e853e-1db9-4828-8494-424bedf8b7bf","Type":"ContainerStarted","Data":"529644773e238010dd6a99225cec80a6eea6f1a53f95f2fecad832bfacff101f"}
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.190228 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-596b6b8cf5-tz8m9" event={"ID":"047e2f42-86db-4fb3-ba7d-6927181dc49b","Type":"ContainerStarted","Data":"4848d6f5fd42b50dd2f7d06cb372bb33ceeb12492d11dd8e9a57c3fa10c3df02"}
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.200804 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m" event={"ID":"09dfaf0f-e3cb-43e7-8e82-e060d5e38767","Type":"ContainerDied","Data":"19325beac5fbef5eed8f072b43ef8701607a837189fee51ac1a6a6a42157c012"}
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.200872 4914 scope.go:117] "RemoveContainer" containerID="4e0ba9ffd674d35b874f0b5ef369f082eaa21a8f751875c0fc8029009ff08836"
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.200971 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-dvj2m"
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.209760 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-t8d7p" podStartSLOduration=4.209738087 podStartE2EDuration="4.209738087s" podCreationTimestamp="2026-01-27 14:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:04:35.194182691 +0000 UTC m=+1233.506532776" watchObservedRunningTime="2026-01-27 14:04:35.209738087 +0000 UTC m=+1233.522088172"
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.220100 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8d56k" event={"ID":"131bae56-5108-4750-8056-68133598a109","Type":"ContainerStarted","Data":"074d88f3e34e2094bc80a5aefadd7ae525544076bd0afc5865cc240734ae40f8"}
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.226120 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58d6b45967-kdfwt" event={"ID":"994c43c4-f1e3-44f0-b2d4-9684d2abf02f","Type":"ContainerStarted","Data":"3384f46a587c2c7e51dc9cd0b3b1d0af8da7838d7e93d592b3023821ed8a4c25"}
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.227904 4914 generic.go:334] "Generic (PLEG): container finished" podID="972edb24-1cfb-4529-bc89-2bb9a89c5579" containerID="cabdc311e35f003f50386267c040ef94967edbdcd18341e1490798a55d210622" exitCode=0
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.229888 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k" event={"ID":"972edb24-1cfb-4529-bc89-2bb9a89c5579","Type":"ContainerDied","Data":"cabdc311e35f003f50386267c040ef94967edbdcd18341e1490798a55d210622"}
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.229931 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k" event={"ID":"972edb24-1cfb-4529-bc89-2bb9a89c5579","Type":"ContainerStarted","Data":"3713cab33a152fa921aa68b2c20b0f6aa6a89c53758ad00e47602d21c21e2aac"}
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.339374 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-dvj2m"]
Jan 27 14:04:35 crc kubenswrapper[4914]: I0127 14:04:35.358668 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-dvj2m"]
Jan 27 14:04:36 crc kubenswrapper[4914]: I0127 14:04:36.335928 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09dfaf0f-e3cb-43e7-8e82-e060d5e38767" path="/var/lib/kubelet/pods/09dfaf0f-e3cb-43e7-8e82-e060d5e38767/volumes"
Jan 27 14:04:36 crc kubenswrapper[4914]: I0127 14:04:36.337519 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"
Jan 27 14:04:36 crc kubenswrapper[4914]: I0127 14:04:36.337547 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k" event={"ID":"972edb24-1cfb-4529-bc89-2bb9a89c5579","Type":"ContainerStarted","Data":"4c77ef4f56b091564b71e02baa818192b820a2b689530436a9b6b957dce2bee2"}
Jan 27 14:04:36 crc kubenswrapper[4914]: I0127 14:04:36.354348 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"516e350a-2ee5-47ab-a4de-395688e55039","Type":"ContainerStarted","Data":"fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9"}
Jan 27 14:04:36 crc kubenswrapper[4914]: I0127 14:04:36.362224 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k" podStartSLOduration=5.362203661 podStartE2EDuration="5.362203661s" podCreationTimestamp="2026-01-27 14:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:04:36.359742264 +0000 UTC m=+1234.672092349" watchObservedRunningTime="2026-01-27 14:04:36.362203661 +0000 UTC m=+1234.674553746"
Jan 27 14:04:36 crc kubenswrapper[4914]: I0127 14:04:36.365882 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"231e853e-1db9-4828-8494-424bedf8b7bf","Type":"ContainerStarted","Data":"abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0"}
Jan 27 14:04:36 crc kubenswrapper[4914]: I0127 14:04:36.366117 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="231e853e-1db9-4828-8494-424bedf8b7bf" containerName="glance-log" containerID="cri-o://abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0" gracePeriod=30
Jan 27 14:04:36 crc kubenswrapper[4914]: I0127 14:04:36.366476 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="231e853e-1db9-4828-8494-424bedf8b7bf" containerName="glance-httpd" containerID="cri-o://7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b" gracePeriod=30
Jan 27 14:04:36 crc kubenswrapper[4914]: I0127 14:04:36.387980 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.387961187 podStartE2EDuration="5.387961187s" podCreationTimestamp="2026-01-27 14:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:04:36.384972575 +0000 UTC m=+1234.697322660" watchObservedRunningTime="2026-01-27 14:04:36.387961187 +0000 UTC m=+1234.700311272"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.261262 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.361621 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/231e853e-1db9-4828-8494-424bedf8b7bf-logs\") pod \"231e853e-1db9-4828-8494-424bedf8b7bf\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") "
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.361699 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-scripts\") pod \"231e853e-1db9-4828-8494-424bedf8b7bf\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") "
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.361754 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87mnd\" (UniqueName: \"kubernetes.io/projected/231e853e-1db9-4828-8494-424bedf8b7bf-kube-api-access-87mnd\") pod \"231e853e-1db9-4828-8494-424bedf8b7bf\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") "
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.361822 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"231e853e-1db9-4828-8494-424bedf8b7bf\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") "
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.361911 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-internal-tls-certs\") pod \"231e853e-1db9-4828-8494-424bedf8b7bf\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") "
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.361942 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/231e853e-1db9-4828-8494-424bedf8b7bf-httpd-run\") pod \"231e853e-1db9-4828-8494-424bedf8b7bf\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") "
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.362024 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-combined-ca-bundle\") pod \"231e853e-1db9-4828-8494-424bedf8b7bf\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") "
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.362047 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-config-data\") pod \"231e853e-1db9-4828-8494-424bedf8b7bf\" (UID: \"231e853e-1db9-4828-8494-424bedf8b7bf\") "
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.364652 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231e853e-1db9-4828-8494-424bedf8b7bf-logs" (OuterVolumeSpecName: "logs") pod "231e853e-1db9-4828-8494-424bedf8b7bf" (UID: "231e853e-1db9-4828-8494-424bedf8b7bf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.364873 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231e853e-1db9-4828-8494-424bedf8b7bf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "231e853e-1db9-4828-8494-424bedf8b7bf" (UID: "231e853e-1db9-4828-8494-424bedf8b7bf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.371473 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "231e853e-1db9-4828-8494-424bedf8b7bf" (UID: "231e853e-1db9-4828-8494-424bedf8b7bf"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.376980 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-scripts" (OuterVolumeSpecName: "scripts") pod "231e853e-1db9-4828-8494-424bedf8b7bf" (UID: "231e853e-1db9-4828-8494-424bedf8b7bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.377443 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231e853e-1db9-4828-8494-424bedf8b7bf-kube-api-access-87mnd" (OuterVolumeSpecName: "kube-api-access-87mnd") pod "231e853e-1db9-4828-8494-424bedf8b7bf" (UID: "231e853e-1db9-4828-8494-424bedf8b7bf"). InnerVolumeSpecName "kube-api-access-87mnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.419801 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"516e350a-2ee5-47ab-a4de-395688e55039","Type":"ContainerStarted","Data":"07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7"}
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.419954 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="516e350a-2ee5-47ab-a4de-395688e55039" containerName="glance-log" containerID="cri-o://fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9" gracePeriod=30
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.420279 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="516e350a-2ee5-47ab-a4de-395688e55039" containerName="glance-httpd" containerID="cri-o://07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7" gracePeriod=30
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.432893 4914 generic.go:334] "Generic (PLEG): container finished" podID="231e853e-1db9-4828-8494-424bedf8b7bf" containerID="7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b" exitCode=143
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.432930 4914 generic.go:334] "Generic (PLEG): container finished" podID="231e853e-1db9-4828-8494-424bedf8b7bf" containerID="abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0" exitCode=143
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.434023 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.434205 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"231e853e-1db9-4828-8494-424bedf8b7bf","Type":"ContainerDied","Data":"7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b"}
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.434233 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"231e853e-1db9-4828-8494-424bedf8b7bf","Type":"ContainerDied","Data":"abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0"}
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.434244 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"231e853e-1db9-4828-8494-424bedf8b7bf","Type":"ContainerDied","Data":"529644773e238010dd6a99225cec80a6eea6f1a53f95f2fecad832bfacff101f"}
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.434261 4914 scope.go:117] "RemoveContainer" containerID="7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.438368 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "231e853e-1db9-4828-8494-424bedf8b7bf" (UID: "231e853e-1db9-4828-8494-424bedf8b7bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.476282 4914 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/231e853e-1db9-4828-8494-424bedf8b7bf-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.476308 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.476322 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/231e853e-1db9-4828-8494-424bedf8b7bf-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.476331 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.476342 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87mnd\" (UniqueName: \"kubernetes.io/projected/231e853e-1db9-4828-8494-424bedf8b7bf-kube-api-access-87mnd\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.476364 4914 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.483353 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.483326746 podStartE2EDuration="6.483326746s" podCreationTimestamp="2026-01-27 14:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:04:37.445022827 +0000 UTC m=+1235.757372922" watchObservedRunningTime="2026-01-27 14:04:37.483326746 +0000 UTC m=+1235.795676841"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.498984 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-config-data" (OuterVolumeSpecName: "config-data") pod "231e853e-1db9-4828-8494-424bedf8b7bf" (UID: "231e853e-1db9-4828-8494-424bedf8b7bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.513783 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "231e853e-1db9-4828-8494-424bedf8b7bf" (UID: "231e853e-1db9-4828-8494-424bedf8b7bf"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.520023 4914 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.583142 4914 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.583175 4914 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.583186 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231e853e-1db9-4828-8494-424bedf8b7bf-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.704926 4914 scope.go:117] "RemoveContainer" containerID="abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.783413 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.800015 4914 scope.go:117] "RemoveContainer" containerID="7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b"
Jan 27 14:04:37 crc kubenswrapper[4914]: E0127 14:04:37.800716 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b\": container with ID starting with 7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b not found: ID does not exist" containerID="7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.800761 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b"} err="failed to get container status \"7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b\": rpc error: code = NotFound desc = could not find container \"7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b\": container with ID starting with 7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b not found: ID does not exist"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.800803 4914 scope.go:117] "RemoveContainer" containerID="abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0"
Jan 27 14:04:37 crc kubenswrapper[4914]: E0127 14:04:37.801531 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0\": container with ID starting with abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0 not found: ID does not exist" containerID="abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.801567 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0"} err="failed to get container status \"abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0\": rpc error: code = NotFound desc = could not find container \"abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0\": container with ID starting with abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0 not found: ID does not exist"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.801598 4914 scope.go:117] "RemoveContainer" containerID="7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.805699 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.808303 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b"} err="failed to get container status \"7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b\": rpc error: code = NotFound desc = could not find container \"7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b\": container with ID starting with 7feb7c2802038c423e5d02e9d16ffbee166d0247d0d89a1733dae0fb1440138b not found: ID does not exist"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.808340 4914 scope.go:117] "RemoveContainer" containerID="abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.809402 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0"} err="failed to get container status \"abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0\": rpc error: code = NotFound desc = could not find container \"abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0\": container with ID starting with abf03156356156424651c7f5997f6c1bd94b139a18d6ec82656be9f3d044f1a0 not found: ID does not exist"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.815265 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:04:37 crc kubenswrapper[4914]: E0127 14:04:37.815693 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09dfaf0f-e3cb-43e7-8e82-e060d5e38767" containerName="init"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.815725 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="09dfaf0f-e3cb-43e7-8e82-e060d5e38767" containerName="init"
Jan 27 14:04:37 crc kubenswrapper[4914]: E0127 14:04:37.815757 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231e853e-1db9-4828-8494-424bedf8b7bf" containerName="glance-httpd"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.815765 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="231e853e-1db9-4828-8494-424bedf8b7bf" containerName="glance-httpd"
Jan 27 14:04:37 crc kubenswrapper[4914]: E0127 14:04:37.815783 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231e853e-1db9-4828-8494-424bedf8b7bf" containerName="glance-log"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.815790 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="231e853e-1db9-4828-8494-424bedf8b7bf" containerName="glance-log"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.816032 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="09dfaf0f-e3cb-43e7-8e82-e060d5e38767" containerName="init"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.816061 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="231e853e-1db9-4828-8494-424bedf8b7bf" containerName="glance-log"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.816077 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="231e853e-1db9-4828-8494-424bedf8b7bf" containerName="glance-httpd"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.818559 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.826229 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.826302 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.850225 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:04:37 crc kubenswrapper[4914]: E0127 14:04:37.938651 4914 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode855aa58_430b_40b4_a5f1_d8abca86976f.slice/crio-e2bc8afa15b9715d930f1d631bac4f165a1567fa763f778fd96f02ded1d467f1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod231e853e_1db9_4828_8494_424bedf8b7bf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod231e853e_1db9_4828_8494_424bedf8b7bf.slice/crio-529644773e238010dd6a99225cec80a6eea6f1a53f95f2fecad832bfacff101f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode855aa58_430b_40b4_a5f1_d8abca86976f.slice/crio-conmon-e2bc8afa15b9715d930f1d631bac4f165a1567fa763f778fd96f02ded1d467f1.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.989496 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d5d70aa-dabc-4de3-859e-01529e77123b-logs\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.989627 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.989694 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.989774 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.989818 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.989883 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xvtg\" (UniqueName: \"kubernetes.io/projected/4d5d70aa-dabc-4de3-859e-01529e77123b-kube-api-access-6xvtg\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.989915 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d5d70aa-dabc-4de3-859e-01529e77123b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:37 crc kubenswrapper[4914]: I0127 14:04:37.989943 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.091210 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.091298 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.091318 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID:
\"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.091341 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xvtg\" (UniqueName: \"kubernetes.io/projected/4d5d70aa-dabc-4de3-859e-01529e77123b-kube-api-access-6xvtg\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.091358 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d5d70aa-dabc-4de3-859e-01529e77123b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.091378 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.091403 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d5d70aa-dabc-4de3-859e-01529e77123b-logs\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.091424 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.091560 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.092092 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d5d70aa-dabc-4de3-859e-01529e77123b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.092521 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d5d70aa-dabc-4de3-859e-01529e77123b-logs\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.092654 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.098307 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.098668 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.100147 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.107601 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xvtg\" (UniqueName: \"kubernetes.io/projected/4d5d70aa-dabc-4de3-859e-01529e77123b-kube-api-access-6xvtg\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.114093 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 
crc kubenswrapper[4914]: I0127 14:04:38.133560 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.155320 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.192942 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-public-tls-certs\") pod \"516e350a-2ee5-47ab-a4de-395688e55039\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.193309 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/516e350a-2ee5-47ab-a4de-395688e55039-logs\") pod \"516e350a-2ee5-47ab-a4de-395688e55039\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.193400 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"516e350a-2ee5-47ab-a4de-395688e55039\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.193455 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/516e350a-2ee5-47ab-a4de-395688e55039-httpd-run\") pod \"516e350a-2ee5-47ab-a4de-395688e55039\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.193478 4914 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-config-data\") pod \"516e350a-2ee5-47ab-a4de-395688e55039\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.193524 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-combined-ca-bundle\") pod \"516e350a-2ee5-47ab-a4de-395688e55039\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.193575 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28v94\" (UniqueName: \"kubernetes.io/projected/516e350a-2ee5-47ab-a4de-395688e55039-kube-api-access-28v94\") pod \"516e350a-2ee5-47ab-a4de-395688e55039\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.193629 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-scripts\") pod \"516e350a-2ee5-47ab-a4de-395688e55039\" (UID: \"516e350a-2ee5-47ab-a4de-395688e55039\") " Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.196967 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/516e350a-2ee5-47ab-a4de-395688e55039-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "516e350a-2ee5-47ab-a4de-395688e55039" (UID: "516e350a-2ee5-47ab-a4de-395688e55039"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.197256 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/516e350a-2ee5-47ab-a4de-395688e55039-logs" (OuterVolumeSpecName: "logs") pod "516e350a-2ee5-47ab-a4de-395688e55039" (UID: "516e350a-2ee5-47ab-a4de-395688e55039"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.201786 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-scripts" (OuterVolumeSpecName: "scripts") pod "516e350a-2ee5-47ab-a4de-395688e55039" (UID: "516e350a-2ee5-47ab-a4de-395688e55039"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.203164 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "516e350a-2ee5-47ab-a4de-395688e55039" (UID: "516e350a-2ee5-47ab-a4de-395688e55039"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.211034 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/516e350a-2ee5-47ab-a4de-395688e55039-kube-api-access-28v94" (OuterVolumeSpecName: "kube-api-access-28v94") pod "516e350a-2ee5-47ab-a4de-395688e55039" (UID: "516e350a-2ee5-47ab-a4de-395688e55039"). InnerVolumeSpecName "kube-api-access-28v94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.240806 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "516e350a-2ee5-47ab-a4de-395688e55039" (UID: "516e350a-2ee5-47ab-a4de-395688e55039"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.263401 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "516e350a-2ee5-47ab-a4de-395688e55039" (UID: "516e350a-2ee5-47ab-a4de-395688e55039"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.270397 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-config-data" (OuterVolumeSpecName: "config-data") pod "516e350a-2ee5-47ab-a4de-395688e55039" (UID: "516e350a-2ee5-47ab-a4de-395688e55039"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.295438 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28v94\" (UniqueName: \"kubernetes.io/projected/516e350a-2ee5-47ab-a4de-395688e55039-kube-api-access-28v94\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.295461 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.295471 4914 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.295479 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/516e350a-2ee5-47ab-a4de-395688e55039-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.295507 4914 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.295515 4914 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/516e350a-2ee5-47ab-a4de-395688e55039-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.295525 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.295533 4914 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516e350a-2ee5-47ab-a4de-395688e55039-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.308584 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="231e853e-1db9-4828-8494-424bedf8b7bf" path="/var/lib/kubelet/pods/231e853e-1db9-4828-8494-424bedf8b7bf/volumes" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.325409 4914 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.402675 4914 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.451529 4914 generic.go:334] "Generic (PLEG): container finished" podID="e855aa58-430b-40b4-a5f1-d8abca86976f" containerID="e2bc8afa15b9715d930f1d631bac4f165a1567fa763f778fd96f02ded1d467f1" exitCode=0 Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.451645 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d266k" event={"ID":"e855aa58-430b-40b4-a5f1-d8abca86976f","Type":"ContainerDied","Data":"e2bc8afa15b9715d930f1d631bac4f165a1567fa763f778fd96f02ded1d467f1"} Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.453750 4914 generic.go:334] "Generic (PLEG): container finished" podID="516e350a-2ee5-47ab-a4de-395688e55039" containerID="07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7" exitCode=143 Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.453782 4914 generic.go:334] "Generic (PLEG): container finished" podID="516e350a-2ee5-47ab-a4de-395688e55039" containerID="fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9" exitCode=143 Jan 27 14:04:38 crc kubenswrapper[4914]: 
I0127 14:04:38.453824 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"516e350a-2ee5-47ab-a4de-395688e55039","Type":"ContainerDied","Data":"07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7"} Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.453873 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"516e350a-2ee5-47ab-a4de-395688e55039","Type":"ContainerDied","Data":"fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9"} Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.453888 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"516e350a-2ee5-47ab-a4de-395688e55039","Type":"ContainerDied","Data":"9f81b88ea719990308989e8b199fd1ada452f3f8c397afcc2e6904b1cb76e0b4"} Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.453907 4914 scope.go:117] "RemoveContainer" containerID="07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.454136 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.520148 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.521884 4914 scope.go:117] "RemoveContainer" containerID="fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.530839 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.538243 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:04:38 crc kubenswrapper[4914]: E0127 14:04:38.539183 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="516e350a-2ee5-47ab-a4de-395688e55039" containerName="glance-log" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.539211 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="516e350a-2ee5-47ab-a4de-395688e55039" containerName="glance-log" Jan 27 14:04:38 crc kubenswrapper[4914]: E0127 14:04:38.539809 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="516e350a-2ee5-47ab-a4de-395688e55039" containerName="glance-httpd" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.539907 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="516e350a-2ee5-47ab-a4de-395688e55039" containerName="glance-httpd" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.540207 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="516e350a-2ee5-47ab-a4de-395688e55039" containerName="glance-httpd" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.540224 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="516e350a-2ee5-47ab-a4de-395688e55039" containerName="glance-log" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.541677 4914 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.544243 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.545321 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.548223 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.562522 4914 scope.go:117] "RemoveContainer" containerID="07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7" Jan 27 14:04:38 crc kubenswrapper[4914]: E0127 14:04:38.565735 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7\": container with ID starting with 07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7 not found: ID does not exist" containerID="07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.565771 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7"} err="failed to get container status \"07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7\": rpc error: code = NotFound desc = could not find container \"07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7\": container with ID starting with 07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7 not found: ID does not exist" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.565802 4914 scope.go:117] "RemoveContainer" 
containerID="fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9" Jan 27 14:04:38 crc kubenswrapper[4914]: E0127 14:04:38.566137 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9\": container with ID starting with fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9 not found: ID does not exist" containerID="fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.566209 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9"} err="failed to get container status \"fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9\": rpc error: code = NotFound desc = could not find container \"fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9\": container with ID starting with fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9 not found: ID does not exist" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.566254 4914 scope.go:117] "RemoveContainer" containerID="07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.566543 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7"} err="failed to get container status \"07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7\": rpc error: code = NotFound desc = could not find container \"07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7\": container with ID starting with 07edbeddab5001128204db279f69f03facee826000a2b3351496b1c94bea7ca7 not found: ID does not exist" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.566565 4914 scope.go:117] 
"RemoveContainer" containerID="fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.567154 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9"} err="failed to get container status \"fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9\": rpc error: code = NotFound desc = could not find container \"fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9\": container with ID starting with fc162e21e675c5b6f0e6a96a92a119c7f3e6cd3267f71d9e68037afceb49f9b9 not found: ID does not exist" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.707167 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d7b3a4-0e06-481c-8231-9aa12929da2c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.707227 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.707409 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.707484 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-config-data\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.707519 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gc2l\" (UniqueName: \"kubernetes.io/projected/a5d7b3a4-0e06-481c-8231-9aa12929da2c-kube-api-access-5gc2l\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.707867 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5d7b3a4-0e06-481c-8231-9aa12929da2c-logs\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.707975 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.708029 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-scripts\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.754727 4914 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.811948 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-config-data\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.811997 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gc2l\" (UniqueName: \"kubernetes.io/projected/a5d7b3a4-0e06-481c-8231-9aa12929da2c-kube-api-access-5gc2l\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.812106 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5d7b3a4-0e06-481c-8231-9aa12929da2c-logs\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.812141 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.812181 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-scripts\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " 
pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.812203 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d7b3a4-0e06-481c-8231-9aa12929da2c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.812223 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.812746 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.812900 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.815324 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d7b3a4-0e06-481c-8231-9aa12929da2c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc 
kubenswrapper[4914]: I0127 14:04:38.816819 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5d7b3a4-0e06-481c-8231-9aa12929da2c-logs\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.820125 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-config-data\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.822888 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-scripts\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.824842 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.829531 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.834699 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5gc2l\" (UniqueName: \"kubernetes.io/projected/a5d7b3a4-0e06-481c-8231-9aa12929da2c-kube-api-access-5gc2l\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.846929 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " pod="openstack/glance-default-external-api-0" Jan 27 14:04:38 crc kubenswrapper[4914]: I0127 14:04:38.894614 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.321220 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="516e350a-2ee5-47ab-a4de-395688e55039" path="/var/lib/kubelet/pods/516e350a-2ee5-47ab-a4de-395688e55039/volumes" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.433663 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59bf577cbf-wxzn9"] Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.491457 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75b4645c86-9r9q2"] Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.493206 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.502183 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.523924 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.544739 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75b4645c86-9r9q2"] Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.622086 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-596b6b8cf5-tz8m9"] Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.645554 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.667612 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6bb6c77c5d-pwr6c"] Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.669725 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.670316 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1dd59938-4cf8-4632-8b1c-237cf981fd5f-config-data\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.670372 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-combined-ca-bundle\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.670434 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpxjb\" (UniqueName: \"kubernetes.io/projected/1dd59938-4cf8-4632-8b1c-237cf981fd5f-kube-api-access-fpxjb\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.670541 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dd59938-4cf8-4632-8b1c-237cf981fd5f-logs\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.670597 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1dd59938-4cf8-4632-8b1c-237cf981fd5f-scripts\") pod \"horizon-75b4645c86-9r9q2\" (UID: 
\"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.670637 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-horizon-tls-certs\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.670949 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-horizon-secret-key\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.679627 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bb6c77c5d-pwr6c"] Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.772634 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpxjb\" (UniqueName: \"kubernetes.io/projected/1dd59938-4cf8-4632-8b1c-237cf981fd5f-kube-api-access-fpxjb\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.772694 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dd59938-4cf8-4632-8b1c-237cf981fd5f-logs\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.772717 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/1dd59938-4cf8-4632-8b1c-237cf981fd5f-scripts\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.772736 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-horizon-tls-certs\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.772754 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7209cbb-e572-463b-bb43-9805cd58ea57-logs\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.772771 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7209cbb-e572-463b-bb43-9805cd58ea57-config-data\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.772860 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffn79\" (UniqueName: \"kubernetes.io/projected/d7209cbb-e572-463b-bb43-9805cd58ea57-kube-api-access-ffn79\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.772880 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/d7209cbb-e572-463b-bb43-9805cd58ea57-horizon-secret-key\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.772895 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-horizon-secret-key\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.772923 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7209cbb-e572-463b-bb43-9805cd58ea57-scripts\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.772941 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1dd59938-4cf8-4632-8b1c-237cf981fd5f-config-data\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.772965 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-combined-ca-bundle\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.772998 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d7209cbb-e572-463b-bb43-9805cd58ea57-combined-ca-bundle\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.773014 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7209cbb-e572-463b-bb43-9805cd58ea57-horizon-tls-certs\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.773655 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dd59938-4cf8-4632-8b1c-237cf981fd5f-logs\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.774148 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1dd59938-4cf8-4632-8b1c-237cf981fd5f-scripts\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.776773 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1dd59938-4cf8-4632-8b1c-237cf981fd5f-config-data\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.780112 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-horizon-tls-certs\") pod \"horizon-75b4645c86-9r9q2\" (UID: 
\"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.780428 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-combined-ca-bundle\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.799083 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpxjb\" (UniqueName: \"kubernetes.io/projected/1dd59938-4cf8-4632-8b1c-237cf981fd5f-kube-api-access-fpxjb\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.808318 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-horizon-secret-key\") pod \"horizon-75b4645c86-9r9q2\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.847983 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.874256 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffn79\" (UniqueName: \"kubernetes.io/projected/d7209cbb-e572-463b-bb43-9805cd58ea57-kube-api-access-ffn79\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.874299 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7209cbb-e572-463b-bb43-9805cd58ea57-horizon-secret-key\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.874336 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7209cbb-e572-463b-bb43-9805cd58ea57-scripts\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.874384 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7209cbb-e572-463b-bb43-9805cd58ea57-combined-ca-bundle\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.874401 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7209cbb-e572-463b-bb43-9805cd58ea57-horizon-tls-certs\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 
crc kubenswrapper[4914]: I0127 14:04:40.874442 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7209cbb-e572-463b-bb43-9805cd58ea57-logs\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.874481 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7209cbb-e572-463b-bb43-9805cd58ea57-config-data\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.875628 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7209cbb-e572-463b-bb43-9805cd58ea57-config-data\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.875720 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7209cbb-e572-463b-bb43-9805cd58ea57-scripts\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.875918 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7209cbb-e572-463b-bb43-9805cd58ea57-logs\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.879782 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/d7209cbb-e572-463b-bb43-9805cd58ea57-horizon-secret-key\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.880747 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7209cbb-e572-463b-bb43-9805cd58ea57-combined-ca-bundle\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.881510 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7209cbb-e572-463b-bb43-9805cd58ea57-horizon-tls-certs\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:40 crc kubenswrapper[4914]: I0127 14:04:40.891793 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffn79\" (UniqueName: \"kubernetes.io/projected/d7209cbb-e572-463b-bb43-9805cd58ea57-kube-api-access-ffn79\") pod \"horizon-6bb6c77c5d-pwr6c\" (UID: \"d7209cbb-e572-463b-bb43-9805cd58ea57\") " pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:41 crc kubenswrapper[4914]: I0127 14:04:41.000576 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:04:42 crc kubenswrapper[4914]: I0127 14:04:42.255039 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k" Jan 27 14:04:42 crc kubenswrapper[4914]: I0127 14:04:42.327864 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"] Jan 27 14:04:42 crc kubenswrapper[4914]: I0127 14:04:42.328112 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg" podUID="829a344f-0e48-4d51-aa5d-31dd6b0fa066" containerName="dnsmasq-dns" containerID="cri-o://a84ae35c8affb78fd7d98cbe1a18bbb2b9d889010ad08ba417b02585a97e9000" gracePeriod=10 Jan 27 14:04:42 crc kubenswrapper[4914]: I0127 14:04:42.524442 4914 generic.go:334] "Generic (PLEG): container finished" podID="829a344f-0e48-4d51-aa5d-31dd6b0fa066" containerID="a84ae35c8affb78fd7d98cbe1a18bbb2b9d889010ad08ba417b02585a97e9000" exitCode=0 Jan 27 14:04:42 crc kubenswrapper[4914]: I0127 14:04:42.524482 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg" event={"ID":"829a344f-0e48-4d51-aa5d-31dd6b0fa066","Type":"ContainerDied","Data":"a84ae35c8affb78fd7d98cbe1a18bbb2b9d889010ad08ba417b02585a97e9000"} Jan 27 14:04:44 crc kubenswrapper[4914]: I0127 14:04:44.136879 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg" podUID="829a344f-0e48-4d51-aa5d-31dd6b0fa066" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Jan 27 14:04:49 crc kubenswrapper[4914]: I0127 14:04:49.137008 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg" podUID="829a344f-0e48-4d51-aa5d-31dd6b0fa066" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Jan 27 
14:04:51 crc kubenswrapper[4914]: E0127 14:04:51.328423 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7" Jan 27 14:04:51 crc kubenswrapper[4914]: E0127 14:04:51.329055 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n687h5f9hbch5d9h665h5bfh7dh5d7hdbh648h5d9h5bbh587h5ffh58fh67ch698h547h589h55bh568h65h68ch5c8hbh5cbh5bfh69h8bh6fh89h58fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7b5tl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOption
s:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-596b6b8cf5-tz8m9_openstack(047e2f42-86db-4fb3-ba7d-6927181dc49b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:04:51 crc kubenswrapper[4914]: E0127 14:04:51.334325 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7\\\"\"]" pod="openstack/horizon-596b6b8cf5-tz8m9" podUID="047e2f42-86db-4fb3-ba7d-6927181dc49b" Jan 27 14:04:53 crc kubenswrapper[4914]: E0127 14:04:53.548588 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b" Jan 27 14:04:53 crc kubenswrapper[4914]: E0127 14:04:53.549060 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qk7xx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-2h5wx_openstack(505474ad-b983-4001-b8b6-f55b1d077e08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:04:53 crc kubenswrapper[4914]: E0127 14:04:53.550337 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-2h5wx" podUID="505474ad-b983-4001-b8b6-f55b1d077e08" Jan 27 14:04:53 crc kubenswrapper[4914]: E0127 14:04:53.710267 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b\\\"\"" pod="openstack/placement-db-sync-2h5wx" podUID="505474ad-b983-4001-b8b6-f55b1d077e08" Jan 27 14:04:53 crc kubenswrapper[4914]: E0127 14:04:53.930611 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777" Jan 27 14:04:53 crc kubenswrapper[4914]: E0127 14:04:53.930807 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b9h5dh9ch76h5c5h7ch5bfhfh55fh9h656h565h6bh596h89h656h8fh5cdh667hb8h6dhd6h8fh5f4h54bh557hd9h668hf5h565h68ch7bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrldw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ec53709e-df2b-4fc9-b9ac-6e144a262455): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 14:04:53 crc kubenswrapper[4914]: E0127 14:04:53.946310 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7"
Jan 27 14:04:53 crc kubenswrapper[4914]: E0127 14:04:53.946494 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b4h5cbh696h5c4h588h5cbh5b4h56dhfdh59bh64fh67bh66fh85h544h647h68h5ffh9dh68dh54dh66fh5cbh5f6h599h675h55h79h8hbhfch564q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpgnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-59bf577cbf-wxzn9_openstack(7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 14:04:53 crc kubenswrapper[4914]: E0127 14:04:53.948405 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7\\\"\"]" pod="openstack/horizon-59bf577cbf-wxzn9" podUID="7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c"
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.038230 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.045369 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-596b6b8cf5-tz8m9"
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.059103 4914 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206180 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/047e2f42-86db-4fb3-ba7d-6927181dc49b-config-data\") pod \"047e2f42-86db-4fb3-ba7d-6927181dc49b\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206248 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-config\") pod \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206311 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-scripts\") pod \"e855aa58-430b-40b4-a5f1-d8abca86976f\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206341 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-dns-svc\") pod \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206434 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxjzc\" (UniqueName: \"kubernetes.io/projected/e855aa58-430b-40b4-a5f1-d8abca86976f-kube-api-access-dxjzc\") pod \"e855aa58-430b-40b4-a5f1-d8abca86976f\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206472 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-ovsdbserver-nb\") pod \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206511 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-fernet-keys\") pod \"e855aa58-430b-40b4-a5f1-d8abca86976f\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206544 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/047e2f42-86db-4fb3-ba7d-6927181dc49b-logs\") pod \"047e2f42-86db-4fb3-ba7d-6927181dc49b\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206568 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/047e2f42-86db-4fb3-ba7d-6927181dc49b-scripts\") pod \"047e2f42-86db-4fb3-ba7d-6927181dc49b\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206594 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-config-data\") pod \"e855aa58-430b-40b4-a5f1-d8abca86976f\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206626 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-credential-keys\") pod \"e855aa58-430b-40b4-a5f1-d8abca86976f\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206655 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7hzt\" (UniqueName: \"kubernetes.io/projected/829a344f-0e48-4d51-aa5d-31dd6b0fa066-kube-api-access-m7hzt\") pod \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206688 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b5tl\" (UniqueName: \"kubernetes.io/projected/047e2f42-86db-4fb3-ba7d-6927181dc49b-kube-api-access-7b5tl\") pod \"047e2f42-86db-4fb3-ba7d-6927181dc49b\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206712 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-combined-ca-bundle\") pod \"e855aa58-430b-40b4-a5f1-d8abca86976f\" (UID: \"e855aa58-430b-40b4-a5f1-d8abca86976f\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206751 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-ovsdbserver-sb\") pod \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206779 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-dns-swift-storage-0\") pod \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\" (UID: \"829a344f-0e48-4d51-aa5d-31dd6b0fa066\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.206849 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/047e2f42-86db-4fb3-ba7d-6927181dc49b-horizon-secret-key\") pod
\"047e2f42-86db-4fb3-ba7d-6927181dc49b\" (UID: \"047e2f42-86db-4fb3-ba7d-6927181dc49b\") "
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.207493 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/047e2f42-86db-4fb3-ba7d-6927181dc49b-scripts" (OuterVolumeSpecName: "scripts") pod "047e2f42-86db-4fb3-ba7d-6927181dc49b" (UID: "047e2f42-86db-4fb3-ba7d-6927181dc49b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.207510 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/047e2f42-86db-4fb3-ba7d-6927181dc49b-config-data" (OuterVolumeSpecName: "config-data") pod "047e2f42-86db-4fb3-ba7d-6927181dc49b" (UID: "047e2f42-86db-4fb3-ba7d-6927181dc49b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.210171 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/047e2f42-86db-4fb3-ba7d-6927181dc49b-logs" (OuterVolumeSpecName: "logs") pod "047e2f42-86db-4fb3-ba7d-6927181dc49b" (UID: "047e2f42-86db-4fb3-ba7d-6927181dc49b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.213027 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/047e2f42-86db-4fb3-ba7d-6927181dc49b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "047e2f42-86db-4fb3-ba7d-6927181dc49b" (UID: "047e2f42-86db-4fb3-ba7d-6927181dc49b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.214348 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829a344f-0e48-4d51-aa5d-31dd6b0fa066-kube-api-access-m7hzt" (OuterVolumeSpecName: "kube-api-access-m7hzt") pod "829a344f-0e48-4d51-aa5d-31dd6b0fa066" (UID: "829a344f-0e48-4d51-aa5d-31dd6b0fa066"). InnerVolumeSpecName "kube-api-access-m7hzt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.214579 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e855aa58-430b-40b4-a5f1-d8abca86976f" (UID: "e855aa58-430b-40b4-a5f1-d8abca86976f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.214654 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e855aa58-430b-40b4-a5f1-d8abca86976f" (UID: "e855aa58-430b-40b4-a5f1-d8abca86976f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.214982 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/047e2f42-86db-4fb3-ba7d-6927181dc49b-kube-api-access-7b5tl" (OuterVolumeSpecName: "kube-api-access-7b5tl") pod "047e2f42-86db-4fb3-ba7d-6927181dc49b" (UID: "047e2f42-86db-4fb3-ba7d-6927181dc49b"). InnerVolumeSpecName "kube-api-access-7b5tl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.216306 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-scripts" (OuterVolumeSpecName: "scripts") pod "e855aa58-430b-40b4-a5f1-d8abca86976f" (UID: "e855aa58-430b-40b4-a5f1-d8abca86976f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.219073 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e855aa58-430b-40b4-a5f1-d8abca86976f-kube-api-access-dxjzc" (OuterVolumeSpecName: "kube-api-access-dxjzc") pod "e855aa58-430b-40b4-a5f1-d8abca86976f" (UID: "e855aa58-430b-40b4-a5f1-d8abca86976f"). InnerVolumeSpecName "kube-api-access-dxjzc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.247714 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-config-data" (OuterVolumeSpecName: "config-data") pod "e855aa58-430b-40b4-a5f1-d8abca86976f" (UID: "e855aa58-430b-40b4-a5f1-d8abca86976f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.259214 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "829a344f-0e48-4d51-aa5d-31dd6b0fa066" (UID: "829a344f-0e48-4d51-aa5d-31dd6b0fa066"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.260329 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e855aa58-430b-40b4-a5f1-d8abca86976f" (UID: "e855aa58-430b-40b4-a5f1-d8abca86976f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.265923 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "829a344f-0e48-4d51-aa5d-31dd6b0fa066" (UID: "829a344f-0e48-4d51-aa5d-31dd6b0fa066"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.266248 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "829a344f-0e48-4d51-aa5d-31dd6b0fa066" (UID: "829a344f-0e48-4d51-aa5d-31dd6b0fa066"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.267336 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-config" (OuterVolumeSpecName: "config") pod "829a344f-0e48-4d51-aa5d-31dd6b0fa066" (UID: "829a344f-0e48-4d51-aa5d-31dd6b0fa066"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.278800 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "829a344f-0e48-4d51-aa5d-31dd6b0fa066" (UID: "829a344f-0e48-4d51-aa5d-31dd6b0fa066"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.308479 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.308860 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.308877 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxjzc\" (UniqueName: \"kubernetes.io/projected/e855aa58-430b-40b4-a5f1-d8abca86976f-kube-api-access-dxjzc\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.308890 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.308904 4914 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.308916 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/047e2f42-86db-4fb3-ba7d-6927181dc49b-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.308927 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/047e2f42-86db-4fb3-ba7d-6927181dc49b-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.308937 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.308946 4914 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.308957 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7hzt\" (UniqueName: \"kubernetes.io/projected/829a344f-0e48-4d51-aa5d-31dd6b0fa066-kube-api-access-m7hzt\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.308972 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b5tl\" (UniqueName: \"kubernetes.io/projected/047e2f42-86db-4fb3-ba7d-6927181dc49b-kube-api-access-7b5tl\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.308983 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e855aa58-430b-40b4-a5f1-d8abca86976f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.308993 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.309004 4914 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.309015 4914 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/047e2f42-86db-4fb3-ba7d-6927181dc49b-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.309025 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/047e2f42-86db-4fb3-ba7d-6927181dc49b-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.309036 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/829a344f-0e48-4d51-aa5d-31dd6b0fa066-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.716152 4914 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.716157 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-tc2gg" event={"ID":"829a344f-0e48-4d51-aa5d-31dd6b0fa066","Type":"ContainerDied","Data":"4cac61a9d8787af8b9e72e805431f56166f2ed807c9119de5323a6d6b5b7aa5b"}
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.716291 4914 scope.go:117] "RemoveContainer" containerID="a84ae35c8affb78fd7d98cbe1a18bbb2b9d889010ad08ba417b02585a97e9000"
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.717589 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d5d70aa-dabc-4de3-859e-01529e77123b","Type":"ContainerStarted","Data":"ce009e60e5fd935b1c634e183cec512c4f09b3b727f66223e05742ce8ba802dd"}
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.719073 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d266k"
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.719175 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d266k" event={"ID":"e855aa58-430b-40b4-a5f1-d8abca86976f","Type":"ContainerDied","Data":"32db439bdb64d0294c9d80aa2b5d1ab91ef3686b25ee855aa5ea2cf56f7cedf8"}
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.719208 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32db439bdb64d0294c9d80aa2b5d1ab91ef3686b25ee855aa5ea2cf56f7cedf8"
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.722488 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-596b6b8cf5-tz8m9"
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.722645 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-596b6b8cf5-tz8m9" event={"ID":"047e2f42-86db-4fb3-ba7d-6927181dc49b","Type":"ContainerDied","Data":"4848d6f5fd42b50dd2f7d06cb372bb33ceeb12492d11dd8e9a57c3fa10c3df02"}
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.761633 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"]
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.773869 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-tc2gg"]
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.802806 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-596b6b8cf5-tz8m9"]
Jan 27 14:04:54 crc kubenswrapper[4914]: I0127 14:04:54.812049 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-596b6b8cf5-tz8m9"]
Jan 27 14:04:54 crc kubenswrapper[4914]: E0127 14:04:54.982325 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16"
Jan 27 14:04:54 crc kubenswrapper[4914]: E0127 14:04:54.982785 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgqr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-cwqrf_openstack(07d55233-43ac-42a0-b604-e38f7bafa346): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 14:04:54 crc kubenswrapper[4914]: E0127 14:04:54.983877 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-cwqrf"
podUID="07d55233-43ac-42a0-b604-e38f7bafa346"
Jan 27 14:04:55 crc kubenswrapper[4914]: E0127 14:04:55.114126 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7"
Jan 27 14:04:55 crc kubenswrapper[4914]: E0127 14:04:55.114470 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c7h597h86h74h5f7h64dhdbh698h695h5f4hcbh54dh5ddhdh6fh5h8h66fh69h555h696h55dh6ch58fh56fh58chcchffh5fch555h579hf9q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghtwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-58d6b45967-kdfwt_openstack(994c43c4-f1e3-44f0-b2d4-9684d2abf02f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.119230 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-d266k"]
Jan 27 14:04:55 crc kubenswrapper[4914]: E0127 14:04:55.119636 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7\\\"\"]" pod="openstack/horizon-58d6b45967-kdfwt" podUID="994c43c4-f1e3-44f0-b2d4-9684d2abf02f"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.143530 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-d266k"]
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.211098 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-g8rk8"]
Jan 27 14:04:55 crc kubenswrapper[4914]: E0127 14:04:55.211482 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e855aa58-430b-40b4-a5f1-d8abca86976f" containerName="keystone-bootstrap"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.211496 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e855aa58-430b-40b4-a5f1-d8abca86976f" containerName="keystone-bootstrap"
Jan 27 14:04:55 crc kubenswrapper[4914]: E0127 14:04:55.211513 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829a344f-0e48-4d51-aa5d-31dd6b0fa066" containerName="dnsmasq-dns"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.211519 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="829a344f-0e48-4d51-aa5d-31dd6b0fa066" containerName="dnsmasq-dns"
Jan 27 14:04:55 crc kubenswrapper[4914]: E0127 14:04:55.211541 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829a344f-0e48-4d51-aa5d-31dd6b0fa066" containerName="init"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.211547 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="829a344f-0e48-4d51-aa5d-31dd6b0fa066" containerName="init"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.211731 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="829a344f-0e48-4d51-aa5d-31dd6b0fa066" containerName="dnsmasq-dns"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.211743 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="e855aa58-430b-40b4-a5f1-d8abca86976f" containerName="keystone-bootstrap"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.212360 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g8rk8"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.214894 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.215047 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.215585 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.215690 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-84xg2"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.215820 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.225090 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g8rk8"]
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.239014 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-scripts\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.239087 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txmfs\" (UniqueName: \"kubernetes.io/projected/fc571d78-a30b-48ae-9687-31f5b6826a12-kube-api-access-txmfs\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8"
Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.239148 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-combined-ca-bundle\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.239167 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-config-data\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.239236 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-credential-keys\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.239258 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-fernet-keys\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.340351 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-combined-ca-bundle\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.340392 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-config-data\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.340455 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-credential-keys\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.340476 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-fernet-keys\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.340501 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-scripts\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.340541 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txmfs\" (UniqueName: \"kubernetes.io/projected/fc571d78-a30b-48ae-9687-31f5b6826a12-kube-api-access-txmfs\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.346356 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-fernet-keys\") pod \"keystone-bootstrap-g8rk8\" (UID: 
\"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.346458 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-scripts\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.348364 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-credential-keys\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.348809 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-config-data\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.348936 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-combined-ca-bundle\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 14:04:55.357629 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txmfs\" (UniqueName: \"kubernetes.io/projected/fc571d78-a30b-48ae-9687-31f5b6826a12-kube-api-access-txmfs\") pod \"keystone-bootstrap-g8rk8\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: I0127 
14:04:55.538593 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:04:55 crc kubenswrapper[4914]: E0127 14:04:55.740117 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-cwqrf" podUID="07d55233-43ac-42a0-b604-e38f7bafa346" Jan 27 14:04:56 crc kubenswrapper[4914]: I0127 14:04:56.304208 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="047e2f42-86db-4fb3-ba7d-6927181dc49b" path="/var/lib/kubelet/pods/047e2f42-86db-4fb3-ba7d-6927181dc49b/volumes" Jan 27 14:04:56 crc kubenswrapper[4914]: I0127 14:04:56.304707 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829a344f-0e48-4d51-aa5d-31dd6b0fa066" path="/var/lib/kubelet/pods/829a344f-0e48-4d51-aa5d-31dd6b0fa066/volumes" Jan 27 14:04:56 crc kubenswrapper[4914]: I0127 14:04:56.305442 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e855aa58-430b-40b4-a5f1-d8abca86976f" path="/var/lib/kubelet/pods/e855aa58-430b-40b4-a5f1-d8abca86976f/volumes" Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.854583 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58d6b45967-kdfwt" event={"ID":"994c43c4-f1e3-44f0-b2d4-9684d2abf02f","Type":"ContainerDied","Data":"3384f46a587c2c7e51dc9cd0b3b1d0af8da7838d7e93d592b3023821ed8a4c25"} Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.855052 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3384f46a587c2c7e51dc9cd0b3b1d0af8da7838d7e93d592b3023821ed8a4c25" Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.866528 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-59bf577cbf-wxzn9" event={"ID":"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c","Type":"ContainerDied","Data":"7e5c125575924aef91061a76859301cf6dd00f4bd0837b741de1ec6b9f4a1c58"} Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.866562 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e5c125575924aef91061a76859301cf6dd00f4bd0837b741de1ec6b9f4a1c58" Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.911386 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.919853 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58d6b45967-kdfwt" Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.921126 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-logs\") pod \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.921179 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpgnt\" (UniqueName: \"kubernetes.io/projected/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-kube-api-access-hpgnt\") pod \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.921286 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-horizon-secret-key\") pod \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.921737 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-logs" (OuterVolumeSpecName: "logs") pod "7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c" (UID: "7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.922344 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-config-data\") pod \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.922370 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-scripts\") pod \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\" (UID: \"7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c\") " Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.922623 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.922876 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-scripts" (OuterVolumeSpecName: "scripts") pod "7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c" (UID: "7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.922977 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-config-data" (OuterVolumeSpecName: "config-data") pod "7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c" (UID: "7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.935517 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-kube-api-access-hpgnt" (OuterVolumeSpecName: "kube-api-access-hpgnt") pod "7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c" (UID: "7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c"). InnerVolumeSpecName "kube-api-access-hpgnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:05 crc kubenswrapper[4914]: I0127 14:05:05.941068 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c" (UID: "7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.024942 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-config-data\") pod \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.025125 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-scripts\") pod \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.025263 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-logs\") pod \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\" (UID: 
\"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.025360 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghtwg\" (UniqueName: \"kubernetes.io/projected/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-kube-api-access-ghtwg\") pod \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.025504 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-horizon-secret-key\") pod \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\" (UID: \"994c43c4-f1e3-44f0-b2d4-9684d2abf02f\") " Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.025824 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-scripts" (OuterVolumeSpecName: "scripts") pod "994c43c4-f1e3-44f0-b2d4-9684d2abf02f" (UID: "994c43c4-f1e3-44f0-b2d4-9684d2abf02f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.025985 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-logs" (OuterVolumeSpecName: "logs") pod "994c43c4-f1e3-44f0-b2d4-9684d2abf02f" (UID: "994c43c4-f1e3-44f0-b2d4-9684d2abf02f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.026393 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-config-data" (OuterVolumeSpecName: "config-data") pod "994c43c4-f1e3-44f0-b2d4-9684d2abf02f" (UID: "994c43c4-f1e3-44f0-b2d4-9684d2abf02f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.026756 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpgnt\" (UniqueName: \"kubernetes.io/projected/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-kube-api-access-hpgnt\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.026796 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.026806 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.026817 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.026848 4914 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.026861 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.027109 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.028703 4914 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "994c43c4-f1e3-44f0-b2d4-9684d2abf02f" (UID: "994c43c4-f1e3-44f0-b2d4-9684d2abf02f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.031089 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-kube-api-access-ghtwg" (OuterVolumeSpecName: "kube-api-access-ghtwg") pod "994c43c4-f1e3-44f0-b2d4-9684d2abf02f" (UID: "994c43c4-f1e3-44f0-b2d4-9684d2abf02f"). InnerVolumeSpecName "kube-api-access-ghtwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.129583 4914 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.129650 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghtwg\" (UniqueName: \"kubernetes.io/projected/994c43c4-f1e3-44f0-b2d4-9684d2abf02f-kube-api-access-ghtwg\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.238811 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bb6c77c5d-pwr6c"] Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.874660 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59bf577cbf-wxzn9" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.874673 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58d6b45967-kdfwt" Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.924484 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58d6b45967-kdfwt"] Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.936096 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-58d6b45967-kdfwt"] Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.986231 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59bf577cbf-wxzn9"] Jan 27 14:05:06 crc kubenswrapper[4914]: I0127 14:05:06.997612 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-59bf577cbf-wxzn9"] Jan 27 14:05:08 crc kubenswrapper[4914]: I0127 14:05:08.318372 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c" path="/var/lib/kubelet/pods/7fdaef3f-d3e7-4be6-92e6-9a2b2539e65c/volumes" Jan 27 14:05:08 crc kubenswrapper[4914]: I0127 14:05:08.319768 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="994c43c4-f1e3-44f0-b2d4-9684d2abf02f" path="/var/lib/kubelet/pods/994c43c4-f1e3-44f0-b2d4-9684d2abf02f/volumes" Jan 27 14:05:10 crc kubenswrapper[4914]: I0127 14:05:10.399782 4914 scope.go:117] "RemoveContainer" containerID="fe3db557a2df98c46ff1ad52453d44b4cfe28ea2a8c85cff9ef04eec00928f0f" Jan 27 14:05:10 crc kubenswrapper[4914]: W0127 14:05:10.421991 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7209cbb_e572_463b_bb43_9805cd58ea57.slice/crio-b787102901be816757c7d84a776d062c112a65fa92c9d6ceba8b2c2599284f77 WatchSource:0}: Error finding container b787102901be816757c7d84a776d062c112a65fa92c9d6ceba8b2c2599284f77: Status 404 returned error can't find the container with id b787102901be816757c7d84a776d062c112a65fa92c9d6ceba8b2c2599284f77 Jan 27 14:05:10 crc kubenswrapper[4914]: E0127 14:05:10.631662 4914 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 27 14:05:10 crc kubenswrapper[4914]: E0127 14:05:10.632340 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPro
pagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjpzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-8d56k_openstack(131bae56-5108-4750-8056-68133598a109): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:05:10 crc kubenswrapper[4914]: E0127 14:05:10.633749 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-8d56k" podUID="131bae56-5108-4750-8056-68133598a109" Jan 27 14:05:10 crc kubenswrapper[4914]: E0127 14:05:10.874895 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:df14f6de785b8aefc38ceb5b47088405224cfa914977c9ab811514cc77b08a67" Jan 27 14:05:10 crc kubenswrapper[4914]: E0127 14:05:10.875128 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:df14f6de785b8aefc38ceb5b47088405224cfa914977c9ab811514cc77b08a67,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b9h5dh9ch76h5c5h7ch5bfhfh55fh9h656h565h6bh596h89h656h8fh5cdh667hb8h6dhd6h8fh5f4h54bh557hd9h668hf5h565h68ch7bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrldw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ec53709e-df2b-4fc9-b9ac-6e144a262455): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 14:05:10 crc kubenswrapper[4914]: I0127 14:05:10.917971 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bb6c77c5d-pwr6c" event={"ID":"d7209cbb-e572-463b-bb43-9805cd58ea57","Type":"ContainerStarted","Data":"b787102901be816757c7d84a776d062c112a65fa92c9d6ceba8b2c2599284f77"}
Jan 27 14:05:10 crc kubenswrapper[4914]: E0127 14:05:10.930000 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-8d56k" podUID="131bae56-5108-4750-8056-68133598a109"
Jan 27 14:05:11 crc kubenswrapper[4914]: I0127 14:05:11.024283 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75b4645c86-9r9q2"]
Jan 27 14:05:11 crc kubenswrapper[4914]: I0127 14:05:11.036170 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:05:11 crc kubenswrapper[4914]: W0127 14:05:11.221497 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5d7b3a4_0e06_481c_8231_9aa12929da2c.slice/crio-196fb2216d2043792851bf2001baaaf15203e3f46300cca5c5ff76f466049bcf WatchSource:0}: Error finding container 196fb2216d2043792851bf2001baaaf15203e3f46300cca5c5ff76f466049bcf: Status 404 returned error can't find the container with id 196fb2216d2043792851bf2001baaaf15203e3f46300cca5c5ff76f466049bcf
Jan 27 14:05:11 crc kubenswrapper[4914]: W0127 14:05:11.223481 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dd59938_4cf8_4632_8b1c_237cf981fd5f.slice/crio-5e6ef53ceac01b8275f3bd37ee4e243e76f0850f34cdc7b8d4a9b792d5703931 WatchSource:0}: Error finding container 5e6ef53ceac01b8275f3bd37ee4e243e76f0850f34cdc7b8d4a9b792d5703931: Status 404 returned error can't find the container with id 5e6ef53ceac01b8275f3bd37ee4e243e76f0850f34cdc7b8d4a9b792d5703931
Jan 27 14:05:11 crc kubenswrapper[4914]: I0127 14:05:11.363298 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g8rk8"]
Jan 27 14:05:11 crc kubenswrapper[4914]: I0127 14:05:11.396129 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 27 14:05:11 crc kubenswrapper[4914]: I0127 14:05:11.932740 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cwqrf" event={"ID":"07d55233-43ac-42a0-b604-e38f7bafa346","Type":"ContainerStarted","Data":"efd560b80c2ce3d88f9e0184068e891a8bcf908d97452d5763ddbf26da8c3fd2"}
Jan 27 14:05:11 crc kubenswrapper[4914]: I0127 14:05:11.936124 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g8rk8" event={"ID":"fc571d78-a30b-48ae-9687-31f5b6826a12","Type":"ContainerStarted","Data":"b306d76bfcf734071304e1e40ca59c0382f8e48e49c46e4ac748b0548454c979"}
Jan 27 14:05:11 crc kubenswrapper[4914]: I0127 14:05:11.936171 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g8rk8" event={"ID":"fc571d78-a30b-48ae-9687-31f5b6826a12","Type":"ContainerStarted","Data":"0a6d50b90ff089c4b6081dbef57f361a908167a3727f4a8e7c5423d5ea0f70da"}
Jan 27 14:05:11 crc kubenswrapper[4914]: I0127 14:05:11.939959 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d5d70aa-dabc-4de3-859e-01529e77123b","Type":"ContainerStarted","Data":"329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c"}
Jan 27 14:05:11 crc kubenswrapper[4914]: I0127 14:05:11.941153 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75b4645c86-9r9q2" event={"ID":"1dd59938-4cf8-4632-8b1c-237cf981fd5f","Type":"ContainerStarted","Data":"5e6ef53ceac01b8275f3bd37ee4e243e76f0850f34cdc7b8d4a9b792d5703931"}
Jan 27 14:05:11 crc kubenswrapper[4914]: I0127 14:05:11.942472 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a5d7b3a4-0e06-481c-8231-9aa12929da2c","Type":"ContainerStarted","Data":"196fb2216d2043792851bf2001baaaf15203e3f46300cca5c5ff76f466049bcf"}
Jan 27 14:05:11 crc kubenswrapper[4914]: I0127 14:05:11.967843 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-cwqrf" podStartSLOduration=3.166217058 podStartE2EDuration="40.96780856s" podCreationTimestamp="2026-01-27 14:04:31 +0000 UTC" firstStartedPulling="2026-01-27 14:04:33.573721911 +0000 UTC m=+1231.886071986" lastFinishedPulling="2026-01-27 14:05:11.375313403 +0000 UTC m=+1269.687663488" observedRunningTime="2026-01-27 14:05:11.965188988 +0000 UTC m=+1270.277539163" watchObservedRunningTime="2026-01-27 14:05:11.96780856 +0000 UTC m=+1270.280158645"
Jan 27 14:05:12 crc kubenswrapper[4914]: I0127 14:05:12.000569 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-g8rk8" podStartSLOduration=17.000545768 podStartE2EDuration="17.000545768s" podCreationTimestamp="2026-01-27 14:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:11.990016899 +0000 UTC m=+1270.302367004" watchObservedRunningTime="2026-01-27 14:05:12.000545768 +0000 UTC m=+1270.312895863"
Jan 27 14:05:12 crc kubenswrapper[4914]: I0127 14:05:12.958043 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bb6c77c5d-pwr6c" event={"ID":"d7209cbb-e572-463b-bb43-9805cd58ea57","Type":"ContainerStarted","Data":"f7c26a58df2b46a44b77eb321fc53a5957c95fd5a166036b07c8ee0d548d2a54"}
Jan 27 14:05:12 crc kubenswrapper[4914]: I0127 14:05:12.958342 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bb6c77c5d-pwr6c" event={"ID":"d7209cbb-e572-463b-bb43-9805cd58ea57","Type":"ContainerStarted","Data":"de435b50265a1c44ec1dcab82b70401c6fbaf7cc8a61a324eaa47c7e2f82e34c"}
Jan 27 14:05:12 crc kubenswrapper[4914]: I0127 14:05:12.961464 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75b4645c86-9r9q2" event={"ID":"1dd59938-4cf8-4632-8b1c-237cf981fd5f","Type":"ContainerStarted","Data":"9b6322b0157303af4f92615415e427acec270b33985cd6111ea50a4b30ddc85c"}
Jan 27 14:05:12 crc kubenswrapper[4914]: I0127 14:05:12.961509 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75b4645c86-9r9q2" event={"ID":"1dd59938-4cf8-4632-8b1c-237cf981fd5f","Type":"ContainerStarted","Data":"ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113"}
Jan 27 14:05:12 crc kubenswrapper[4914]: I0127 14:05:12.969395 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a5d7b3a4-0e06-481c-8231-9aa12929da2c","Type":"ContainerStarted","Data":"d240a6a34f4a16c7365ab05c8934d690568cd3ddffe2990eb019a8e48b29ae39"}
Jan 27 14:05:12 crc kubenswrapper[4914]: I0127 14:05:12.970820 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2h5wx" event={"ID":"505474ad-b983-4001-b8b6-f55b1d077e08","Type":"ContainerStarted","Data":"e7a8ae2e99521855c053a5fa9a098d033e89dffc625ece54b09edfc02ba894b3"}
Jan 27 14:05:12 crc kubenswrapper[4914]: I0127 14:05:12.990089 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d5d70aa-dabc-4de3-859e-01529e77123b","Type":"ContainerStarted","Data":"8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500"}
Jan 27 14:05:12 crc kubenswrapper[4914]: I0127 14:05:12.990193 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4d5d70aa-dabc-4de3-859e-01529e77123b" containerName="glance-log" containerID="cri-o://329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c" gracePeriod=30
Jan 27 14:05:12 crc kubenswrapper[4914]: I0127 14:05:12.990328 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4d5d70aa-dabc-4de3-859e-01529e77123b" containerName="glance-httpd" containerID="cri-o://8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500" gracePeriod=30
Jan 27 14:05:12 crc kubenswrapper[4914]: I0127 14:05:12.993316 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6bb6c77c5d-pwr6c" podStartSLOduration=31.687233979 podStartE2EDuration="32.993296904s" podCreationTimestamp="2026-01-27 14:04:40 +0000 UTC" firstStartedPulling="2026-01-27 14:05:10.438440946 +0000 UTC m=+1268.750791031" lastFinishedPulling="2026-01-27 14:05:11.744503871 +0000 UTC m=+1270.056853956" observedRunningTime="2026-01-27 14:05:12.984467912 +0000 UTC m=+1271.296818007" watchObservedRunningTime="2026-01-27 14:05:12.993296904 +0000 UTC m=+1271.305646989"
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.001472 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2h5wx" podStartSLOduration=4.299985869 podStartE2EDuration="42.001447428s" podCreationTimestamp="2026-01-27 14:04:31 +0000 UTC" firstStartedPulling="2026-01-27 14:04:33.671654163 +0000 UTC m=+1231.984004248" lastFinishedPulling="2026-01-27 14:05:11.373115722 +0000 UTC m=+1269.685465807" observedRunningTime="2026-01-27 14:05:12.997997503 +0000 UTC m=+1271.310347608" watchObservedRunningTime="2026-01-27 14:05:13.001447428 +0000 UTC m=+1271.313797513"
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.032887 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75b4645c86-9r9q2" podStartSLOduration=31.864464408 podStartE2EDuration="33.032868039s" podCreationTimestamp="2026-01-27 14:04:40 +0000 UTC" firstStartedPulling="2026-01-27 14:05:11.296368239 +0000 UTC m=+1269.608718324" lastFinishedPulling="2026-01-27 14:05:12.46477187 +0000 UTC m=+1270.777121955" observedRunningTime="2026-01-27 14:05:13.021808457 +0000 UTC m=+1271.334158542" watchObservedRunningTime="2026-01-27 14:05:13.032868039 +0000 UTC m=+1271.345218114"
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.057150 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=36.057133874 podStartE2EDuration="36.057133874s" podCreationTimestamp="2026-01-27 14:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:13.045570427 +0000 UTC m=+1271.357920512" watchObservedRunningTime="2026-01-27 14:05:13.057133874 +0000 UTC m=+1271.369483959"
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.676066 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.751772 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-internal-tls-certs\") pod \"4d5d70aa-dabc-4de3-859e-01529e77123b\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") "
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.751879 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"4d5d70aa-dabc-4de3-859e-01529e77123b\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") "
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.752072 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-scripts\") pod \"4d5d70aa-dabc-4de3-859e-01529e77123b\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") "
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.752186 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d5d70aa-dabc-4de3-859e-01529e77123b-logs\") pod \"4d5d70aa-dabc-4de3-859e-01529e77123b\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") "
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.753089 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xvtg\" (UniqueName: \"kubernetes.io/projected/4d5d70aa-dabc-4de3-859e-01529e77123b-kube-api-access-6xvtg\") pod \"4d5d70aa-dabc-4de3-859e-01529e77123b\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") "
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.753240 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-config-data\") pod \"4d5d70aa-dabc-4de3-859e-01529e77123b\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") "
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.753370 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d5d70aa-dabc-4de3-859e-01529e77123b-httpd-run\") pod \"4d5d70aa-dabc-4de3-859e-01529e77123b\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") "
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.753407 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-combined-ca-bundle\") pod \"4d5d70aa-dabc-4de3-859e-01529e77123b\" (UID: \"4d5d70aa-dabc-4de3-859e-01529e77123b\") "
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.757915 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5d70aa-dabc-4de3-859e-01529e77123b-kube-api-access-6xvtg" (OuterVolumeSpecName: "kube-api-access-6xvtg") pod "4d5d70aa-dabc-4de3-859e-01529e77123b" (UID: "4d5d70aa-dabc-4de3-859e-01529e77123b"). InnerVolumeSpecName "kube-api-access-6xvtg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.758008 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "4d5d70aa-dabc-4de3-859e-01529e77123b" (UID: "4d5d70aa-dabc-4de3-859e-01529e77123b"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.758128 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-scripts" (OuterVolumeSpecName: "scripts") pod "4d5d70aa-dabc-4de3-859e-01529e77123b" (UID: "4d5d70aa-dabc-4de3-859e-01529e77123b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.758497 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d5d70aa-dabc-4de3-859e-01529e77123b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4d5d70aa-dabc-4de3-859e-01529e77123b" (UID: "4d5d70aa-dabc-4de3-859e-01529e77123b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.759689 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d5d70aa-dabc-4de3-859e-01529e77123b-logs" (OuterVolumeSpecName: "logs") pod "4d5d70aa-dabc-4de3-859e-01529e77123b" (UID: "4d5d70aa-dabc-4de3-859e-01529e77123b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.777277 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d5d70aa-dabc-4de3-859e-01529e77123b" (UID: "4d5d70aa-dabc-4de3-859e-01529e77123b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.800328 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4d5d70aa-dabc-4de3-859e-01529e77123b" (UID: "4d5d70aa-dabc-4de3-859e-01529e77123b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.809426 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-config-data" (OuterVolumeSpecName: "config-data") pod "4d5d70aa-dabc-4de3-859e-01529e77123b" (UID: "4d5d70aa-dabc-4de3-859e-01529e77123b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.856394 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xvtg\" (UniqueName: \"kubernetes.io/projected/4d5d70aa-dabc-4de3-859e-01529e77123b-kube-api-access-6xvtg\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.856426 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.856438 4914 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d5d70aa-dabc-4de3-859e-01529e77123b-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.856447 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.856502 4914 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.856536 4914 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.856545 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d5d70aa-dabc-4de3-859e-01529e77123b-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.856556 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d5d70aa-dabc-4de3-859e-01529e77123b-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.872716 4914 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Jan 27 14:05:13 crc kubenswrapper[4914]: I0127 14:05:13.958280 4914 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.003066 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a5d7b3a4-0e06-481c-8231-9aa12929da2c" containerName="glance-log" containerID="cri-o://d240a6a34f4a16c7365ab05c8934d690568cd3ddffe2990eb019a8e48b29ae39" gracePeriod=30
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.003322 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a5d7b3a4-0e06-481c-8231-9aa12929da2c","Type":"ContainerStarted","Data":"c99eb9e26fdb7c0e72816ace015d472376e54c1450e98e1f86ca75944485ab91"}
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.005506 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a5d7b3a4-0e06-481c-8231-9aa12929da2c" containerName="glance-httpd" containerID="cri-o://c99eb9e26fdb7c0e72816ace015d472376e54c1450e98e1f86ca75944485ab91" gracePeriod=30
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.013704 4914 generic.go:334] "Generic (PLEG): container finished" podID="4d5d70aa-dabc-4de3-859e-01529e77123b" containerID="8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500" exitCode=0
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.013786 4914 generic.go:334] "Generic (PLEG): container finished" podID="4d5d70aa-dabc-4de3-859e-01529e77123b" containerID="329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c" exitCode=143
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.013766 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d5d70aa-dabc-4de3-859e-01529e77123b","Type":"ContainerDied","Data":"8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500"}
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.014195 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d5d70aa-dabc-4de3-859e-01529e77123b","Type":"ContainerDied","Data":"329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c"}
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.014235 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d5d70aa-dabc-4de3-859e-01529e77123b","Type":"ContainerDied","Data":"ce009e60e5fd935b1c634e183cec512c4f09b3b727f66223e05742ce8ba802dd"}
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.014258 4914 scope.go:117] "RemoveContainer" containerID="8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.013765 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.050336 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=36.050317753 podStartE2EDuration="36.050317753s" podCreationTimestamp="2026-01-27 14:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:14.036349721 +0000 UTC m=+1272.348699806" watchObservedRunningTime="2026-01-27 14:05:14.050317753 +0000 UTC m=+1272.362667838"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.069893 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.080718 4914 scope.go:117] "RemoveContainer" containerID="329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.081178 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.096808 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:05:14 crc kubenswrapper[4914]: E0127 14:05:14.097259 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5d70aa-dabc-4de3-859e-01529e77123b" containerName="glance-httpd"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.097282 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5d70aa-dabc-4de3-859e-01529e77123b" containerName="glance-httpd"
Jan 27 14:05:14 crc kubenswrapper[4914]: E0127 14:05:14.097305 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5d70aa-dabc-4de3-859e-01529e77123b" containerName="glance-log"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.097314 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5d70aa-dabc-4de3-859e-01529e77123b" containerName="glance-log"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.097509 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5d70aa-dabc-4de3-859e-01529e77123b" containerName="glance-httpd"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.097538 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5d70aa-dabc-4de3-859e-01529e77123b" containerName="glance-log"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.101868 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.106301 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.107334 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.135206 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.164049 4914 scope.go:117] "RemoveContainer" containerID="8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500"
Jan 27 14:05:14 crc kubenswrapper[4914]: E0127 14:05:14.165071 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500\": container with ID starting with 8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500 not found: ID does not exist" containerID="8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.165141 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500"} err="failed to get container status \"8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500\": rpc error: code = NotFound desc = could not find container \"8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500\": container with ID starting with 8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500 not found: ID does not exist"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.165164 4914 scope.go:117] "RemoveContainer" containerID="329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c"
Jan 27 14:05:14 crc kubenswrapper[4914]: E0127 14:05:14.165580 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c\": container with ID starting with 329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c not found: ID does not exist" containerID="329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.165628 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c"} err="failed to get container status \"329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c\": rpc error: code = NotFound desc = could not find container \"329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c\": container with ID starting with 329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c not found: ID does not exist"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.165648 4914 scope.go:117] "RemoveContainer" containerID="8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.165650 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.165712 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.165770 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-logs\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.165952 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.166018 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500"} err="failed to get container status \"8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500\": rpc error: code = NotFound desc = could not find container \"8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500\": container with ID starting with 8c7bfca26739a4015c80dc7ada4aac31c8ac81cba99b8f74161b69d3b69d3500 not found: ID does not exist"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.166045 4914 scope.go:117] "RemoveContainer" containerID="329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.166140 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.166224 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp7pw\" (UniqueName: \"kubernetes.io/projected/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-kube-api-access-cp7pw\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.166351 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.167676 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.169123 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c"} err="failed to get container status \"329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c\": rpc error: code = NotFound desc = could not find container \"329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c\": container with ID starting with 329afe074edd371c08624a14e99e8d86dcd07e2017f516497abdbde070d82b4c not found: ID does not exist"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.270124 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.271007 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.271039 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-logs\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.271098 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.271155 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.271191 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp7pw\"
(UniqueName: \"kubernetes.io/projected/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-kube-api-access-cp7pw\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.271226 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.271252 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.275050 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.275457 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.275705 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") 
pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.276159 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-logs\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.278413 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.281412 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.289541 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.296694 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp7pw\" (UniqueName: \"kubernetes.io/projected/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-kube-api-access-cp7pw\") pod \"glance-default-internal-api-0\" (UID: 
\"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.311874 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d5d70aa-dabc-4de3-859e-01529e77123b" path="/var/lib/kubelet/pods/4d5d70aa-dabc-4de3-859e-01529e77123b/volumes" Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.315499 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:05:14 crc kubenswrapper[4914]: I0127 14:05:14.425556 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:05:15 crc kubenswrapper[4914]: I0127 14:05:15.025174 4914 generic.go:334] "Generic (PLEG): container finished" podID="fc571d78-a30b-48ae-9687-31f5b6826a12" containerID="b306d76bfcf734071304e1e40ca59c0382f8e48e49c46e4ac748b0548454c979" exitCode=0 Jan 27 14:05:15 crc kubenswrapper[4914]: I0127 14:05:15.025358 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g8rk8" event={"ID":"fc571d78-a30b-48ae-9687-31f5b6826a12","Type":"ContainerDied","Data":"b306d76bfcf734071304e1e40ca59c0382f8e48e49c46e4ac748b0548454c979"} Jan 27 14:05:15 crc kubenswrapper[4914]: I0127 14:05:15.037059 4914 generic.go:334] "Generic (PLEG): container finished" podID="a5d7b3a4-0e06-481c-8231-9aa12929da2c" containerID="c99eb9e26fdb7c0e72816ace015d472376e54c1450e98e1f86ca75944485ab91" exitCode=0 Jan 27 14:05:15 crc kubenswrapper[4914]: I0127 14:05:15.037086 4914 generic.go:334] "Generic (PLEG): container finished" podID="a5d7b3a4-0e06-481c-8231-9aa12929da2c" containerID="d240a6a34f4a16c7365ab05c8934d690568cd3ddffe2990eb019a8e48b29ae39" exitCode=143 Jan 27 14:05:15 crc 
kubenswrapper[4914]: I0127 14:05:15.037124 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a5d7b3a4-0e06-481c-8231-9aa12929da2c","Type":"ContainerDied","Data":"c99eb9e26fdb7c0e72816ace015d472376e54c1450e98e1f86ca75944485ab91"} Jan 27 14:05:15 crc kubenswrapper[4914]: I0127 14:05:15.037148 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a5d7b3a4-0e06-481c-8231-9aa12929da2c","Type":"ContainerDied","Data":"d240a6a34f4a16c7365ab05c8934d690568cd3ddffe2990eb019a8e48b29ae39"} Jan 27 14:05:15 crc kubenswrapper[4914]: I0127 14:05:15.038575 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:05:16 crc kubenswrapper[4914]: I0127 14:05:16.084186 4914 generic.go:334] "Generic (PLEG): container finished" podID="505474ad-b983-4001-b8b6-f55b1d077e08" containerID="e7a8ae2e99521855c053a5fa9a098d033e89dffc625ece54b09edfc02ba894b3" exitCode=0 Jan 27 14:05:16 crc kubenswrapper[4914]: I0127 14:05:16.084442 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2h5wx" event={"ID":"505474ad-b983-4001-b8b6-f55b1d077e08","Type":"ContainerDied","Data":"e7a8ae2e99521855c053a5fa9a098d033e89dffc625ece54b09edfc02ba894b3"} Jan 27 14:05:17 crc kubenswrapper[4914]: I0127 14:05:17.095688 4914 generic.go:334] "Generic (PLEG): container finished" podID="07d55233-43ac-42a0-b604-e38f7bafa346" containerID="efd560b80c2ce3d88f9e0184068e891a8bcf908d97452d5763ddbf26da8c3fd2" exitCode=0 Jan 27 14:05:17 crc kubenswrapper[4914]: I0127 14:05:17.095800 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cwqrf" event={"ID":"07d55233-43ac-42a0-b604-e38f7bafa346","Type":"ContainerDied","Data":"efd560b80c2ce3d88f9e0184068e891a8bcf908d97452d5763ddbf26da8c3fd2"} Jan 27 14:05:19 crc kubenswrapper[4914]: W0127 14:05:19.518397 4914 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece67619_8ef7_4c3f_ba5a_36fcc1f05fe4.slice/crio-811ec6b00a391f773d7175623c9934fd9641d75cb808a091fc4cf1d782c0f148 WatchSource:0}: Error finding container 811ec6b00a391f773d7175623c9934fd9641d75cb808a091fc4cf1d782c0f148: Status 404 returned error can't find the container with id 811ec6b00a391f773d7175623c9934fd9641d75cb808a091fc4cf1d782c0f148 Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.722669 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cwqrf" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.754631 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2h5wx" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.759255 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g8rk8" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.779851 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.794408 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07d55233-43ac-42a0-b604-e38f7bafa346-db-sync-config-data\") pod \"07d55233-43ac-42a0-b604-e38f7bafa346\" (UID: \"07d55233-43ac-42a0-b604-e38f7bafa346\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.794505 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgqr2\" (UniqueName: \"kubernetes.io/projected/07d55233-43ac-42a0-b604-e38f7bafa346-kube-api-access-vgqr2\") pod \"07d55233-43ac-42a0-b604-e38f7bafa346\" (UID: \"07d55233-43ac-42a0-b604-e38f7bafa346\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.794590 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d55233-43ac-42a0-b604-e38f7bafa346-combined-ca-bundle\") pod \"07d55233-43ac-42a0-b604-e38f7bafa346\" (UID: \"07d55233-43ac-42a0-b604-e38f7bafa346\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.815406 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d55233-43ac-42a0-b604-e38f7bafa346-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "07d55233-43ac-42a0-b604-e38f7bafa346" (UID: "07d55233-43ac-42a0-b604-e38f7bafa346"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.826336 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07d55233-43ac-42a0-b604-e38f7bafa346-kube-api-access-vgqr2" (OuterVolumeSpecName: "kube-api-access-vgqr2") pod "07d55233-43ac-42a0-b604-e38f7bafa346" (UID: "07d55233-43ac-42a0-b604-e38f7bafa346"). 
InnerVolumeSpecName "kube-api-access-vgqr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.830937 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d55233-43ac-42a0-b604-e38f7bafa346-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07d55233-43ac-42a0-b604-e38f7bafa346" (UID: "07d55233-43ac-42a0-b604-e38f7bafa346"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.897774 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-combined-ca-bundle\") pod \"505474ad-b983-4001-b8b6-f55b1d077e08\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.897851 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-scripts\") pod \"fc571d78-a30b-48ae-9687-31f5b6826a12\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.897886 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-scripts\") pod \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.897921 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5d7b3a4-0e06-481c-8231-9aa12929da2c-logs\") pod \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.897961 4914 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-credential-keys\") pod \"fc571d78-a30b-48ae-9687-31f5b6826a12\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898002 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505474ad-b983-4001-b8b6-f55b1d077e08-logs\") pod \"505474ad-b983-4001-b8b6-f55b1d077e08\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898026 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-combined-ca-bundle\") pod \"fc571d78-a30b-48ae-9687-31f5b6826a12\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898054 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txmfs\" (UniqueName: \"kubernetes.io/projected/fc571d78-a30b-48ae-9687-31f5b6826a12-kube-api-access-txmfs\") pod \"fc571d78-a30b-48ae-9687-31f5b6826a12\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898075 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-public-tls-certs\") pod \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898097 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-scripts\") pod \"505474ad-b983-4001-b8b6-f55b1d077e08\" (UID: 
\"505474ad-b983-4001-b8b6-f55b1d077e08\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898122 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-fernet-keys\") pod \"fc571d78-a30b-48ae-9687-31f5b6826a12\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898145 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-config-data\") pod \"505474ad-b983-4001-b8b6-f55b1d077e08\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898172 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk7xx\" (UniqueName: \"kubernetes.io/projected/505474ad-b983-4001-b8b6-f55b1d077e08-kube-api-access-qk7xx\") pod \"505474ad-b983-4001-b8b6-f55b1d077e08\" (UID: \"505474ad-b983-4001-b8b6-f55b1d077e08\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898223 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-config-data\") pod \"fc571d78-a30b-48ae-9687-31f5b6826a12\" (UID: \"fc571d78-a30b-48ae-9687-31f5b6826a12\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898262 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-combined-ca-bundle\") pod \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898283 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a5d7b3a4-0e06-481c-8231-9aa12929da2c-httpd-run\") pod \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898308 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898361 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gc2l\" (UniqueName: \"kubernetes.io/projected/a5d7b3a4-0e06-481c-8231-9aa12929da2c-kube-api-access-5gc2l\") pod \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898383 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-config-data\") pod \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\" (UID: \"a5d7b3a4-0e06-481c-8231-9aa12929da2c\") " Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.898527 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d7b3a4-0e06-481c-8231-9aa12929da2c-logs" (OuterVolumeSpecName: "logs") pod "a5d7b3a4-0e06-481c-8231-9aa12929da2c" (UID: "a5d7b3a4-0e06-481c-8231-9aa12929da2c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.899055 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505474ad-b983-4001-b8b6-f55b1d077e08-logs" (OuterVolumeSpecName: "logs") pod "505474ad-b983-4001-b8b6-f55b1d077e08" (UID: "505474ad-b983-4001-b8b6-f55b1d077e08"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.899076 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgqr2\" (UniqueName: \"kubernetes.io/projected/07d55233-43ac-42a0-b604-e38f7bafa346-kube-api-access-vgqr2\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.899137 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d55233-43ac-42a0-b604-e38f7bafa346-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.899154 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5d7b3a4-0e06-481c-8231-9aa12929da2c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.899168 4914 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07d55233-43ac-42a0-b604-e38f7bafa346-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.902436 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-scripts" (OuterVolumeSpecName: "scripts") pod "a5d7b3a4-0e06-481c-8231-9aa12929da2c" (UID: "a5d7b3a4-0e06-481c-8231-9aa12929da2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.902524 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-scripts" (OuterVolumeSpecName: "scripts") pod "fc571d78-a30b-48ae-9687-31f5b6826a12" (UID: "fc571d78-a30b-48ae-9687-31f5b6826a12"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.902806 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d7b3a4-0e06-481c-8231-9aa12929da2c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a5d7b3a4-0e06-481c-8231-9aa12929da2c" (UID: "a5d7b3a4-0e06-481c-8231-9aa12929da2c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.903735 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-scripts" (OuterVolumeSpecName: "scripts") pod "505474ad-b983-4001-b8b6-f55b1d077e08" (UID: "505474ad-b983-4001-b8b6-f55b1d077e08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.904370 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d7b3a4-0e06-481c-8231-9aa12929da2c-kube-api-access-5gc2l" (OuterVolumeSpecName: "kube-api-access-5gc2l") pod "a5d7b3a4-0e06-481c-8231-9aa12929da2c" (UID: "a5d7b3a4-0e06-481c-8231-9aa12929da2c"). InnerVolumeSpecName "kube-api-access-5gc2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.906843 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fc571d78-a30b-48ae-9687-31f5b6826a12" (UID: "fc571d78-a30b-48ae-9687-31f5b6826a12"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.907337 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc571d78-a30b-48ae-9687-31f5b6826a12-kube-api-access-txmfs" (OuterVolumeSpecName: "kube-api-access-txmfs") pod "fc571d78-a30b-48ae-9687-31f5b6826a12" (UID: "fc571d78-a30b-48ae-9687-31f5b6826a12"). InnerVolumeSpecName "kube-api-access-txmfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.909267 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fc571d78-a30b-48ae-9687-31f5b6826a12" (UID: "fc571d78-a30b-48ae-9687-31f5b6826a12"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.911260 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/505474ad-b983-4001-b8b6-f55b1d077e08-kube-api-access-qk7xx" (OuterVolumeSpecName: "kube-api-access-qk7xx") pod "505474ad-b983-4001-b8b6-f55b1d077e08" (UID: "505474ad-b983-4001-b8b6-f55b1d077e08"). InnerVolumeSpecName "kube-api-access-qk7xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.915219 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "a5d7b3a4-0e06-481c-8231-9aa12929da2c" (UID: "a5d7b3a4-0e06-481c-8231-9aa12929da2c"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.932281 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "505474ad-b983-4001-b8b6-f55b1d077e08" (UID: "505474ad-b983-4001-b8b6-f55b1d077e08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.934185 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-config-data" (OuterVolumeSpecName: "config-data") pod "fc571d78-a30b-48ae-9687-31f5b6826a12" (UID: "fc571d78-a30b-48ae-9687-31f5b6826a12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.941009 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5d7b3a4-0e06-481c-8231-9aa12929da2c" (UID: "a5d7b3a4-0e06-481c-8231-9aa12929da2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.944266 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc571d78-a30b-48ae-9687-31f5b6826a12" (UID: "fc571d78-a30b-48ae-9687-31f5b6826a12"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.944488 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-config-data" (OuterVolumeSpecName: "config-data") pod "505474ad-b983-4001-b8b6-f55b1d077e08" (UID: "505474ad-b983-4001-b8b6-f55b1d077e08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.958796 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a5d7b3a4-0e06-481c-8231-9aa12929da2c" (UID: "a5d7b3a4-0e06-481c-8231-9aa12929da2c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:19 crc kubenswrapper[4914]: I0127 14:05:19.959228 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-config-data" (OuterVolumeSpecName: "config-data") pod "a5d7b3a4-0e06-481c-8231-9aa12929da2c" (UID: "a5d7b3a4-0e06-481c-8231-9aa12929da2c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.000442 4914 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.000640 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.000696 4914 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.000747 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.000798 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk7xx\" (UniqueName: \"kubernetes.io/projected/505474ad-b983-4001-b8b6-f55b1d077e08-kube-api-access-qk7xx\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.000886 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.000976 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.001037 4914 reconciler_common.go:293] 
"Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d7b3a4-0e06-481c-8231-9aa12929da2c-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.001115 4914 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.001171 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gc2l\" (UniqueName: \"kubernetes.io/projected/a5d7b3a4-0e06-481c-8231-9aa12929da2c-kube-api-access-5gc2l\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.001223 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.001281 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505474ad-b983-4001-b8b6-f55b1d077e08-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.001332 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.001381 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5d7b3a4-0e06-481c-8231-9aa12929da2c-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.001431 4914 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.001481 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505474ad-b983-4001-b8b6-f55b1d077e08-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.001530 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc571d78-a30b-48ae-9687-31f5b6826a12-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.001593 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txmfs\" (UniqueName: \"kubernetes.io/projected/fc571d78-a30b-48ae-9687-31f5b6826a12-kube-api-access-txmfs\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.020950 4914 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.103240 4914 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.123414 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a5d7b3a4-0e06-481c-8231-9aa12929da2c","Type":"ContainerDied","Data":"196fb2216d2043792851bf2001baaaf15203e3f46300cca5c5ff76f466049bcf"}
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.123463 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.123468 4914 scope.go:117] "RemoveContainer" containerID="c99eb9e26fdb7c0e72816ace015d472376e54c1450e98e1f86ca75944485ab91"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.129986 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cwqrf" event={"ID":"07d55233-43ac-42a0-b604-e38f7bafa346","Type":"ContainerDied","Data":"9a3bae91264cb032f1929f663a7cbf9c566396d93ff8948d2874ee44dd9fd39f"}
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.130024 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a3bae91264cb032f1929f663a7cbf9c566396d93ff8948d2874ee44dd9fd39f"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.130064 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cwqrf"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.131647 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec53709e-df2b-4fc9-b9ac-6e144a262455","Type":"ContainerStarted","Data":"3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547"}
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.133034 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2h5wx" event={"ID":"505474ad-b983-4001-b8b6-f55b1d077e08","Type":"ContainerDied","Data":"7ddeadb13df51973c038fad3e1c0dff60ea0d5ae7bec3a55a214d297abfbec23"}
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.133066 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ddeadb13df51973c038fad3e1c0dff60ea0d5ae7bec3a55a214d297abfbec23"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.133133 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2h5wx"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.136574 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g8rk8" event={"ID":"fc571d78-a30b-48ae-9687-31f5b6826a12","Type":"ContainerDied","Data":"0a6d50b90ff089c4b6081dbef57f361a908167a3727f4a8e7c5423d5ea0f70da"}
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.136617 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a6d50b90ff089c4b6081dbef57f361a908167a3727f4a8e7c5423d5ea0f70da"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.136649 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g8rk8"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.139800 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4","Type":"ContainerStarted","Data":"81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3"}
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.139859 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4","Type":"ContainerStarted","Data":"811ec6b00a391f773d7175623c9934fd9641d75cb808a091fc4cf1d782c0f148"}
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.156311 4914 scope.go:117] "RemoveContainer" containerID="d240a6a34f4a16c7365ab05c8934d690568cd3ddffe2990eb019a8e48b29ae39"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.186576 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.193788 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.245905 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:05:20 crc kubenswrapper[4914]: E0127 14:05:20.246301 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc571d78-a30b-48ae-9687-31f5b6826a12" containerName="keystone-bootstrap"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.246318 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc571d78-a30b-48ae-9687-31f5b6826a12" containerName="keystone-bootstrap"
Jan 27 14:05:20 crc kubenswrapper[4914]: E0127 14:05:20.246328 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d7b3a4-0e06-481c-8231-9aa12929da2c" containerName="glance-log"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.246335 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d7b3a4-0e06-481c-8231-9aa12929da2c" containerName="glance-log"
Jan 27 14:05:20 crc kubenswrapper[4914]: E0127 14:05:20.246341 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d7b3a4-0e06-481c-8231-9aa12929da2c" containerName="glance-httpd"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.246348 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d7b3a4-0e06-481c-8231-9aa12929da2c" containerName="glance-httpd"
Jan 27 14:05:20 crc kubenswrapper[4914]: E0127 14:05:20.246369 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d55233-43ac-42a0-b604-e38f7bafa346" containerName="barbican-db-sync"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.246374 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d55233-43ac-42a0-b604-e38f7bafa346" containerName="barbican-db-sync"
Jan 27 14:05:20 crc kubenswrapper[4914]: E0127 14:05:20.246396 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505474ad-b983-4001-b8b6-f55b1d077e08" containerName="placement-db-sync"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.246403 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="505474ad-b983-4001-b8b6-f55b1d077e08" containerName="placement-db-sync"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.246548 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d55233-43ac-42a0-b604-e38f7bafa346" containerName="barbican-db-sync"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.246557 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="505474ad-b983-4001-b8b6-f55b1d077e08" containerName="placement-db-sync"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.246572 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc571d78-a30b-48ae-9687-31f5b6826a12" containerName="keystone-bootstrap"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.246582 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d7b3a4-0e06-481c-8231-9aa12929da2c" containerName="glance-log"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.246592 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d7b3a4-0e06-481c-8231-9aa12929da2c" containerName="glance-httpd"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.247524 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.250475 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.250652 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.254998 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.312345 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5d7b3a4-0e06-481c-8231-9aa12929da2c" path="/var/lib/kubelet/pods/a5d7b3a4-0e06-481c-8231-9aa12929da2c/volumes"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.410539 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.410605 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.410750 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.410911 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6740124e-468c-4527-af23-511164f5724e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.410942 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6740124e-468c-4527-af23-511164f5724e-logs\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.411037 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44nx8\" (UniqueName: \"kubernetes.io/projected/6740124e-468c-4527-af23-511164f5724e-kube-api-access-44nx8\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.411093 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-scripts\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.411131 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-config-data\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.512628 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44nx8\" (UniqueName: \"kubernetes.io/projected/6740124e-468c-4527-af23-511164f5724e-kube-api-access-44nx8\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.512683 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-scripts\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.512710 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-config-data\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.512743 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.512773 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.512813 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.512879 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6740124e-468c-4527-af23-511164f5724e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.512903 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6740124e-468c-4527-af23-511164f5724e-logs\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.513359 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6740124e-468c-4527-af23-511164f5724e-logs\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.514555 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.515868 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6740124e-468c-4527-af23-511164f5724e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.519214 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.522353 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-scripts\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.524180 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-config-data\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.532778 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.533388 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44nx8\" (UniqueName: \"kubernetes.io/projected/6740124e-468c-4527-af23-511164f5724e-kube-api-access-44nx8\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.543132 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.572916 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.848215 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75b4645c86-9r9q2"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.848925 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-75b4645c86-9r9q2"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.952082 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-66649fdc7d-bbgtq"]
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.956470 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.969154 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.969453 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.969632 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.969702 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 27 14:05:20 crc kubenswrapper[4914]: I0127 14:05:20.969803 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4kzf2"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.005862 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6bb6c77c5d-pwr6c"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.007138 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6bb6c77c5d-pwr6c"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.021080 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-scripts\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.021130 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9qnp\" (UniqueName: \"kubernetes.io/projected/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-kube-api-access-p9qnp\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.021169 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-public-tls-certs\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.021228 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-internal-tls-certs\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.021262 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-config-data\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.021280 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-combined-ca-bundle\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.021320 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-logs\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.021427 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-794d7bcbcd-drzqz"]
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.022985 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-794d7bcbcd-drzqz"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.028661 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.029025 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.029065 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.042230 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-84xg2"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.042458 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.045359 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.089121 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-794d7bcbcd-drzqz"]
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.126909 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-combined-ca-bundle\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.126959 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-logs\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.126986 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-credential-keys\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.127030 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-scripts\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.127051 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-internal-tls-certs\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.127069 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9qnp\" (UniqueName: \"kubernetes.io/projected/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-kube-api-access-p9qnp\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.127094 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-public-tls-certs\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.127118 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-public-tls-certs\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.127148 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-fernet-keys\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.127166 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fjfw\" (UniqueName: \"kubernetes.io/projected/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-kube-api-access-9fjfw\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.127215 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-internal-tls-certs\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.127245 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-config-data\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.127264 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-scripts\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.127285 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-config-data\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.127302 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-combined-ca-bundle\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.128558 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-logs\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.135520 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-config-data\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.138421 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-internal-tls-certs\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.144226 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-scripts\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.145318 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66649fdc7d-bbgtq"]
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.166308 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9qnp\" (UniqueName: \"kubernetes.io/projected/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-kube-api-access-p9qnp\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.167440 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-public-tls-certs\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.176261 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-combined-ca-bundle\") pod \"placement-66649fdc7d-bbgtq\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") " pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.178323 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4","Type":"ContainerStarted","Data":"5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2"}
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.228696 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-combined-ca-bundle\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.228758 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-credential-keys\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.228856 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-internal-tls-certs\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz"
Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.228896 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName:
\"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-public-tls-certs\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.228937 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-fernet-keys\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.228958 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fjfw\" (UniqueName: \"kubernetes.io/projected/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-kube-api-access-9fjfw\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.229039 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-config-data\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.229059 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-scripts\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.234562 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-internal-tls-certs\") pod 
\"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.239586 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-scripts\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.254573 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-credential-keys\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.254683 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-public-tls-certs\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.255165 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-fernet-keys\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.272814 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6d44bd966-mhzhf"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.274628 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.281931 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.282321 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-config-data\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.286969 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-8v69b" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.287658 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fjfw\" (UniqueName: \"kubernetes.io/projected/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-kube-api-access-9fjfw\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.291182 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.296543 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66649fdc7d-bbgtq" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.318890 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-cf9c8fc8c-dzqz5"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.320752 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.323447 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.331968 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-config-data-custom\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.332255 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-combined-ca-bundle\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.332384 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-combined-ca-bundle\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.332489 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c325fe3-0885-4dfb-b83f-525bc610fe16-logs\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc 
kubenswrapper[4914]: I0127 14:05:21.332611 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp6w4\" (UniqueName: \"kubernetes.io/projected/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-kube-api-access-xp6w4\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.332704 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-config-data-custom\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.332793 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6kj9\" (UniqueName: \"kubernetes.io/projected/6c325fe3-0885-4dfb-b83f-525bc610fe16-kube-api-access-r6kj9\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.332932 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-config-data\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.333037 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-logs\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" 
(UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.333180 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-config-data\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.337109 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208bd7a9-58df-4bd1-8eac-ddcb45417fb8-combined-ca-bundle\") pod \"keystone-794d7bcbcd-drzqz\" (UID: \"208bd7a9-58df-4bd1-8eac-ddcb45417fb8\") " pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.355130 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-cf9c8fc8c-dzqz5"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.377268 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.417386 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d44bd966-mhzhf"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.421817 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.421799744 podStartE2EDuration="7.421799744s" podCreationTimestamp="2026-01-27 14:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:21.217143225 +0000 UTC m=+1279.529493320" watchObservedRunningTime="2026-01-27 14:05:21.421799744 +0000 UTC m=+1279.734149829" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.438538 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-combined-ca-bundle\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.438579 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-combined-ca-bundle\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.438608 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c325fe3-0885-4dfb-b83f-525bc610fe16-logs\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " 
pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.438633 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp6w4\" (UniqueName: \"kubernetes.io/projected/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-kube-api-access-xp6w4\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.438648 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6kj9\" (UniqueName: \"kubernetes.io/projected/6c325fe3-0885-4dfb-b83f-525bc610fe16-kube-api-access-r6kj9\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.440120 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c325fe3-0885-4dfb-b83f-525bc610fe16-logs\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.440476 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-config-data-custom\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.440521 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-config-data\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: 
\"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.442696 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-logs\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.442804 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-config-data\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.442930 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-config-data-custom\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.444326 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-logs\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.450266 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-combined-ca-bundle\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: 
\"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.450383 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-config-data-custom\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.451122 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-combined-ca-bundle\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.454344 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-config-data-custom\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.456910 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-config-data\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.464768 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp6w4\" (UniqueName: \"kubernetes.io/projected/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-kube-api-access-xp6w4\") pod 
\"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.471247 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6kj9\" (UniqueName: \"kubernetes.io/projected/6c325fe3-0885-4dfb-b83f-525bc610fe16-kube-api-access-r6kj9\") pod \"barbican-keystone-listener-6d44bd966-mhzhf\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.474546 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-config-data\") pod \"barbican-worker-cf9c8fc8c-dzqz5\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.502349 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5b88564dfc-pk2d6"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.521601 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.553931 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-24nd6"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.560340 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.586497 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87cff560-6d78-4257-b80f-16e6172fc629-logs\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.586960 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-config-data-custom\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.587132 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-config-data\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.587159 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrssb\" (UniqueName: \"kubernetes.io/projected/87cff560-6d78-4257-b80f-16e6172fc629-kube-api-access-mrssb\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.587275 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-combined-ca-bundle\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.594093 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b88564dfc-pk2d6"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.621618 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-24nd6"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.642936 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-c4dfd54dd-q4t9s"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.644727 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.688561 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.688797 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.688929 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-config\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.689060 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87cff560-6d78-4257-b80f-16e6172fc629-logs\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.689189 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.689367 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98dwq\" (UniqueName: \"kubernetes.io/projected/3e59f256-f1bc-4557-aeff-60d3982f292e-kube-api-access-98dwq\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.689485 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-config-data-custom\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.689989 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-config-data\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.690043 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrssb\" (UniqueName: \"kubernetes.io/projected/87cff560-6d78-4257-b80f-16e6172fc629-kube-api-access-mrssb\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.690105 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-dns-svc\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.690161 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-combined-ca-bundle\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.692663 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87cff560-6d78-4257-b80f-16e6172fc629-logs\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.696527 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-combined-ca-bundle\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.698868 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-config-data\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.699133 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-config-data-custom\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.703265 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.706945 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c4dfd54dd-q4t9s"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.737594 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.740494 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrssb\" (UniqueName: \"kubernetes.io/projected/87cff560-6d78-4257-b80f-16e6172fc629-kube-api-access-mrssb\") pod \"barbican-worker-5b88564dfc-pk2d6\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") " pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.754592 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78d766f864-q9f4n"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.756566 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.761400 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.764533 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.768582 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78d766f864-q9f4n"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.791873 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.791914 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-combined-ca-bundle\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.791939 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98dwq\" (UniqueName: \"kubernetes.io/projected/3e59f256-f1bc-4557-aeff-60d3982f292e-kube-api-access-98dwq\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.791998 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-logs\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.792037 4914 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-dns-svc\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.792067 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdj7z\" (UniqueName: \"kubernetes.io/projected/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-kube-api-access-jdj7z\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.792086 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-config-data\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.792125 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.792144 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 
14:05:21.792166 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-config-data-custom\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.792185 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-config\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.793350 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.794188 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.794410 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-dns-svc\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.796724 4914 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.821049 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-config\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.827135 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98dwq\" (UniqueName: \"kubernetes.io/projected/3e59f256-f1bc-4557-aeff-60d3982f292e-kube-api-access-98dwq\") pod \"dnsmasq-dns-6554f656b5-24nd6\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.893517 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-combined-ca-bundle\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.893779 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-config-data\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.893826 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnq6m\" (UniqueName: \"kubernetes.io/projected/b004d6c6-2289-4b0c-8779-ab5a36e853ee-kube-api-access-nnq6m\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.893881 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-logs\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.893916 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-combined-ca-bundle\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.893982 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdj7z\" (UniqueName: \"kubernetes.io/projected/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-kube-api-access-jdj7z\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.894013 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-config-data\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 
crc kubenswrapper[4914]: I0127 14:05:21.894062 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-config-data-custom\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.894082 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b004d6c6-2289-4b0c-8779-ab5a36e853ee-logs\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.894107 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-config-data-custom\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.894810 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-logs\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.907758 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-combined-ca-bundle\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " 
pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.908108 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-config-data\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.910731 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-config-data-custom\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.928158 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdj7z\" (UniqueName: \"kubernetes.io/projected/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-kube-api-access-jdj7z\") pod \"barbican-keystone-listener-c4dfd54dd-q4t9s\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.933268 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.937217 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.977153 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66649fdc7d-bbgtq"] Jan 27 14:05:21 crc kubenswrapper[4914]: I0127 14:05:21.977175 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:21.997903 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-config-data\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:21.997952 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnq6m\" (UniqueName: \"kubernetes.io/projected/b004d6c6-2289-4b0c-8779-ab5a36e853ee-kube-api-access-nnq6m\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:21.997997 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-combined-ca-bundle\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:21.998067 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-config-data-custom\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:21.998091 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b004d6c6-2289-4b0c-8779-ab5a36e853ee-logs\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " 
pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:21.998507 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b004d6c6-2289-4b0c-8779-ab5a36e853ee-logs\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.006772 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-combined-ca-bundle\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.007427 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-config-data\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.008383 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-config-data-custom\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.022264 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnq6m\" (UniqueName: \"kubernetes.io/projected/b004d6c6-2289-4b0c-8779-ab5a36e853ee-kube-api-access-nnq6m\") pod \"barbican-api-78d766f864-q9f4n\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:22 crc kubenswrapper[4914]: 
I0127 14:05:22.128611 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-794d7bcbcd-drzqz"] Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.192855 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.227183 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66649fdc7d-bbgtq" event={"ID":"87a4ea21-5742-4829-aa81-d62dc8f5f5e4","Type":"ContainerStarted","Data":"1d21194b9e36141864e8420bbea6735d6fb5887e339b905f5c649535dcd02967"} Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.232009 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6740124e-468c-4527-af23-511164f5724e","Type":"ContainerStarted","Data":"73d1f7b3615ab9f919d43b7987dd286ea16dd5ad0dbf7060458a7a75be0f0fbf"} Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.236189 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-794d7bcbcd-drzqz" event={"ID":"208bd7a9-58df-4bd1-8eac-ddcb45417fb8","Type":"ContainerStarted","Data":"44c8e04064aa39ad3a449ea6e60e1a19731f46b7e332b289699d67aedbd95bcf"} Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.395182 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d44bd966-mhzhf"] Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.413775 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-cf9c8fc8c-dzqz5"] Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.686954 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-24nd6"] Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.764775 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b88564dfc-pk2d6"] Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.875907 4914 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c4dfd54dd-q4t9s"] Jan 27 14:05:22 crc kubenswrapper[4914]: I0127 14:05:22.982333 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78d766f864-q9f4n"] Jan 27 14:05:23 crc kubenswrapper[4914]: I0127 14:05:23.268928 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6740124e-468c-4527-af23-511164f5724e","Type":"ContainerStarted","Data":"ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1"} Jan 27 14:05:23 crc kubenswrapper[4914]: I0127 14:05:23.273252 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-24nd6" event={"ID":"3e59f256-f1bc-4557-aeff-60d3982f292e","Type":"ContainerStarted","Data":"16e9e8096810b680108c647647ea2c138c325ff370492fcfeb4e85da587a8e50"} Jan 27 14:05:23 crc kubenswrapper[4914]: I0127 14:05:23.275527 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" event={"ID":"b7a3f205-3ea1-491b-af09-8e2ad479e0a5","Type":"ContainerStarted","Data":"75e5259d505fc3660d895bf43d44e53e745881d2ffe220b8530e398971255ebd"} Jan 27 14:05:23 crc kubenswrapper[4914]: I0127 14:05:23.277384 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-794d7bcbcd-drzqz" event={"ID":"208bd7a9-58df-4bd1-8eac-ddcb45417fb8","Type":"ContainerStarted","Data":"15950f6b51f36a52a55528756a18484c93b0f4aa0a14bd8598d10c3ec6a3a68f"} Jan 27 14:05:23 crc kubenswrapper[4914]: I0127 14:05:23.277985 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:23 crc kubenswrapper[4914]: I0127 14:05:23.282651 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66649fdc7d-bbgtq" 
event={"ID":"87a4ea21-5742-4829-aa81-d62dc8f5f5e4","Type":"ContainerStarted","Data":"53195d022deb479b90e9aec14958fe34410a4cd991583080f554d41674f67fba"} Jan 27 14:05:23 crc kubenswrapper[4914]: I0127 14:05:23.284362 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" event={"ID":"6c325fe3-0885-4dfb-b83f-525bc610fe16","Type":"ContainerStarted","Data":"93b0747ead5756d7a705a74d40235275d20a3115c0c78b2746a3715ec02b0cd2"} Jan 27 14:05:23 crc kubenswrapper[4914]: I0127 14:05:23.285250 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78d766f864-q9f4n" event={"ID":"b004d6c6-2289-4b0c-8779-ab5a36e853ee","Type":"ContainerStarted","Data":"e4c6eec45ff742f0564138c8cc43bdf404442dde9becc4b8c7c4d90f74b00ee6"} Jan 27 14:05:23 crc kubenswrapper[4914]: I0127 14:05:23.286530 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" event={"ID":"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2","Type":"ContainerStarted","Data":"99aa37093bd8759ff030e0507594d8e85f485cf319fbb0c0b97ffb9db3aefb2e"} Jan 27 14:05:23 crc kubenswrapper[4914]: I0127 14:05:23.308526 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b88564dfc-pk2d6" event={"ID":"87cff560-6d78-4257-b80f-16e6172fc629","Type":"ContainerStarted","Data":"0a8b590f184799ed897617013f9ff7e96efc2a0b5efefbc376a1e6a2cc257cc6"} Jan 27 14:05:23 crc kubenswrapper[4914]: I0127 14:05:23.359542 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-794d7bcbcd-drzqz" podStartSLOduration=3.359523929 podStartE2EDuration="3.359523929s" podCreationTimestamp="2026-01-27 14:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:23.354197503 +0000 UTC m=+1281.666547588" watchObservedRunningTime="2026-01-27 14:05:23.359523929 +0000 UTC 
m=+1281.671874024" Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.349254 4914 generic.go:334] "Generic (PLEG): container finished" podID="37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1" containerID="cebd09a8c949395b3a9f23c0b61f4017ca5fbe61ee1d3120edfef929567ede6e" exitCode=0 Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.349354 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t8d7p" event={"ID":"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1","Type":"ContainerDied","Data":"cebd09a8c949395b3a9f23c0b61f4017ca5fbe61ee1d3120edfef929567ede6e"} Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.355438 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78d766f864-q9f4n" event={"ID":"b004d6c6-2289-4b0c-8779-ab5a36e853ee","Type":"ContainerStarted","Data":"950934a28b67a4a5b0a56b08f76e6500884ccce199b3e557cd45b2444f12e931"} Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.355534 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78d766f864-q9f4n" event={"ID":"b004d6c6-2289-4b0c-8779-ab5a36e853ee","Type":"ContainerStarted","Data":"fc8a11d54a876b3fb4ee305d207fd66e4bd1d465f7d67c2dfc4b843b52319df4"} Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.356405 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.356447 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.372761 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66649fdc7d-bbgtq" event={"ID":"87a4ea21-5742-4829-aa81-d62dc8f5f5e4","Type":"ContainerStarted","Data":"0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193"} Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.373138 4914 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/placement-66649fdc7d-bbgtq" Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.373180 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66649fdc7d-bbgtq" Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.377500 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6740124e-468c-4527-af23-511164f5724e","Type":"ContainerStarted","Data":"164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775"} Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.380048 4914 generic.go:334] "Generic (PLEG): container finished" podID="3e59f256-f1bc-4557-aeff-60d3982f292e" containerID="59f1ff7e15f9fcd781c22209ccc9a56a5ffcc8a51b88058cb080c490e0719a4e" exitCode=0 Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.380893 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-24nd6" event={"ID":"3e59f256-f1bc-4557-aeff-60d3982f292e","Type":"ContainerDied","Data":"59f1ff7e15f9fcd781c22209ccc9a56a5ffcc8a51b88058cb080c490e0719a4e"} Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.401329 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-78d766f864-q9f4n" podStartSLOduration=3.40131135 podStartE2EDuration="3.40131135s" podCreationTimestamp="2026-01-27 14:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:24.385172338 +0000 UTC m=+1282.697522423" watchObservedRunningTime="2026-01-27 14:05:24.40131135 +0000 UTC m=+1282.713661425" Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.426884 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.427304 4914 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.462170 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-66649fdc7d-bbgtq" podStartSLOduration=4.462150717 podStartE2EDuration="4.462150717s" podCreationTimestamp="2026-01-27 14:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:24.421680768 +0000 UTC m=+1282.734031313" watchObservedRunningTime="2026-01-27 14:05:24.462150717 +0000 UTC m=+1282.774500802" Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.467274 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.469867 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.469856778 podStartE2EDuration="4.469856778s" podCreationTimestamp="2026-01-27 14:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:24.449503991 +0000 UTC m=+1282.761854076" watchObservedRunningTime="2026-01-27 14:05:24.469856778 +0000 UTC m=+1282.782206863" Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.495437 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.883640 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-845f6ddb76-569qx"] Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.885695 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.891346 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.891440 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 27 14:05:24 crc kubenswrapper[4914]: I0127 14:05:24.896210 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-845f6ddb76-569qx"]
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.050217 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20138273-cae3-4cc1-960f-f861eca72126-logs\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.050281 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxsv7\" (UniqueName: \"kubernetes.io/projected/20138273-cae3-4cc1-960f-f861eca72126-kube-api-access-sxsv7\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.050353 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-internal-tls-certs\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.050372 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-public-tls-certs\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.050467 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-config-data\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.050519 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-config-data-custom\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.050569 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-combined-ca-bundle\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.152769 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-config-data\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.152822 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-config-data-custom\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.152860 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-combined-ca-bundle\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.152898 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20138273-cae3-4cc1-960f-f861eca72126-logs\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.152933 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxsv7\" (UniqueName: \"kubernetes.io/projected/20138273-cae3-4cc1-960f-f861eca72126-kube-api-access-sxsv7\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.152991 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-internal-tls-certs\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.153007 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-public-tls-certs\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.155666 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20138273-cae3-4cc1-960f-f861eca72126-logs\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.158987 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-internal-tls-certs\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.160154 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-public-tls-certs\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.161761 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-config-data\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.162678 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-combined-ca-bundle\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.175656 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxsv7\" (UniqueName: \"kubernetes.io/projected/20138273-cae3-4cc1-960f-f861eca72126-kube-api-access-sxsv7\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.179781 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-config-data-custom\") pod \"barbican-api-845f6ddb76-569qx\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.272269 4914 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.394340 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.394651 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.847202 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-845f6ddb76-569qx"]
Jan 27 14:05:25 crc kubenswrapper[4914]: W0127 14:05:25.871419 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20138273_cae3_4cc1_960f_f861eca72126.slice/crio-e7cd4b3d595b109430e24a80be64d22e285ee5a5b35702e49207ebb1ae2ee3bf WatchSource:0}: Error finding container e7cd4b3d595b109430e24a80be64d22e285ee5a5b35702e49207ebb1ae2ee3bf: Status 404 returned error can't find the container with id e7cd4b3d595b109430e24a80be64d22e285ee5a5b35702e49207ebb1ae2ee3bf
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.877920 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t8d7p"
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.969559 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-combined-ca-bundle\") pod \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\" (UID: \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\") "
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.969867 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-config\") pod \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\" (UID: \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\") "
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.969907 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ftd6\" (UniqueName: \"kubernetes.io/projected/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-kube-api-access-4ftd6\") pod \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\" (UID: \"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1\") "
Jan 27 14:05:25 crc kubenswrapper[4914]: I0127 14:05:25.976583 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-kube-api-access-4ftd6" (OuterVolumeSpecName: "kube-api-access-4ftd6") pod "37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1" (UID: "37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1"). InnerVolumeSpecName "kube-api-access-4ftd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.072224 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ftd6\" (UniqueName: \"kubernetes.io/projected/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-kube-api-access-4ftd6\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.085925 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1" (UID: "37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.091925 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-config" (OuterVolumeSpecName: "config") pod "37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1" (UID: "37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.174665 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.174691 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.404505 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" event={"ID":"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2","Type":"ContainerStarted","Data":"cbfc5f728474a52fada3ab88ec9a39c8b04a7b6a0ba129edc6d60b9402c56156"}
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.404580 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" event={"ID":"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2","Type":"ContainerStarted","Data":"815802d2387e85fb93e089c9e45488916c87c9e2c180e91e4697b2e55b3d8c29"}
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.408572 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" event={"ID":"6c325fe3-0885-4dfb-b83f-525bc610fe16","Type":"ContainerStarted","Data":"f341af659fe755d7f218a580ad08fbe8b91904f5e1144bbc134dc0699dc098ce"}
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.408601 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" event={"ID":"6c325fe3-0885-4dfb-b83f-525bc610fe16","Type":"ContainerStarted","Data":"e8dcb4f2b8c965415efca450fa703b3b94db6096741dce0654ecd97b3d817d68"}
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.413022 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b88564dfc-pk2d6" event={"ID":"87cff560-6d78-4257-b80f-16e6172fc629","Type":"ContainerStarted","Data":"3f984ff47bc02ee131274ec13f98b917eed6d7c0984b5f4ad9ed23f20d0a300b"}
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.413299 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b88564dfc-pk2d6" event={"ID":"87cff560-6d78-4257-b80f-16e6172fc629","Type":"ContainerStarted","Data":"1e43ccef9ed27cc09e85bb71803810d7f2a701d33b94b88b7c80b4666ba13daa"}
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.417418 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-24nd6" event={"ID":"3e59f256-f1bc-4557-aeff-60d3982f292e","Type":"ContainerStarted","Data":"dca1806c76f38b4543a5c8078cbb7002459a92385318ad1c7be971b2aeb94f8a"}
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.418647 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6554f656b5-24nd6"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.435725 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t8d7p"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.436585 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t8d7p" event={"ID":"37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1","Type":"ContainerDied","Data":"cc3edc9f9d649c31d23191adad989afd6914eb775ef0f941131d56aa66561b6a"}
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.436617 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc3edc9f9d649c31d23191adad989afd6914eb775ef0f941131d56aa66561b6a"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.443409 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-845f6ddb76-569qx" event={"ID":"20138273-cae3-4cc1-960f-f861eca72126","Type":"ContainerStarted","Data":"d66e8fa0cb9b6228361d9ce732e443af3fc91ee40165c73befc436dc06b20719"}
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.443460 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-845f6ddb76-569qx" event={"ID":"20138273-cae3-4cc1-960f-f861eca72126","Type":"ContainerStarted","Data":"e7cd4b3d595b109430e24a80be64d22e285ee5a5b35702e49207ebb1ae2ee3bf"}
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.444337 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" podStartSLOduration=2.754809022 podStartE2EDuration="5.44431856s" podCreationTimestamp="2026-01-27 14:05:21 +0000 UTC" firstStartedPulling="2026-01-27 14:05:22.430270862 +0000 UTC m=+1280.742620947" lastFinishedPulling="2026-01-27 14:05:25.11978039 +0000 UTC m=+1283.432130485" observedRunningTime="2026-01-27 14:05:26.430159602 +0000 UTC m=+1284.742509677" watchObservedRunningTime="2026-01-27 14:05:26.44431856 +0000 UTC m=+1284.756668645"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.457454 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" event={"ID":"b7a3f205-3ea1-491b-af09-8e2ad479e0a5","Type":"ContainerStarted","Data":"443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671"}
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.457494 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" event={"ID":"b7a3f205-3ea1-491b-af09-8e2ad479e0a5","Type":"ContainerStarted","Data":"38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c"}
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.464100 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6554f656b5-24nd6" podStartSLOduration=5.464078432 podStartE2EDuration="5.464078432s" podCreationTimestamp="2026-01-27 14:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:26.456541695 +0000 UTC m=+1284.768891770" watchObservedRunningTime="2026-01-27 14:05:26.464078432 +0000 UTC m=+1284.776428517"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.479693 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" podStartSLOduration=2.7686360309999998 podStartE2EDuration="5.479662979s" podCreationTimestamp="2026-01-27 14:05:21 +0000 UTC" firstStartedPulling="2026-01-27 14:05:22.412959748 +0000 UTC m=+1280.725309833" lastFinishedPulling="2026-01-27 14:05:25.123986696 +0000 UTC m=+1283.436336781" observedRunningTime="2026-01-27 14:05:26.47863762 +0000 UTC m=+1284.790987705" watchObservedRunningTime="2026-01-27 14:05:26.479662979 +0000 UTC m=+1284.792013054"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.550094 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5b88564dfc-pk2d6" podStartSLOduration=3.252741939 podStartE2EDuration="5.550072929s" podCreationTimestamp="2026-01-27 14:05:21 +0000 UTC" firstStartedPulling="2026-01-27 14:05:22.826688597 +0000 UTC m=+1281.139038682" lastFinishedPulling="2026-01-27 14:05:25.124019557 +0000 UTC m=+1283.436369672" observedRunningTime="2026-01-27 14:05:26.533572426 +0000 UTC m=+1284.845922501" watchObservedRunningTime="2026-01-27 14:05:26.550072929 +0000 UTC m=+1284.862423014"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.593962 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" podStartSLOduration=3.446756094 podStartE2EDuration="5.59393573s" podCreationTimestamp="2026-01-27 14:05:21 +0000 UTC" firstStartedPulling="2026-01-27 14:05:22.966123507 +0000 UTC m=+1281.278473592" lastFinishedPulling="2026-01-27 14:05:25.113303133 +0000 UTC m=+1283.425653228" observedRunningTime="2026-01-27 14:05:26.567370733 +0000 UTC m=+1284.879720828" watchObservedRunningTime="2026-01-27 14:05:26.59393573 +0000 UTC m=+1284.906285815"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.641790 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-cf9c8fc8c-dzqz5"]
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.654989 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6d44bd966-mhzhf"]
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.674509 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-24nd6"]
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.707928 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hqbkf"]
Jan 27 14:05:26 crc kubenswrapper[4914]: E0127 14:05:26.708289 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1" containerName="neutron-db-sync"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.708304 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1" containerName="neutron-db-sync"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.708497 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1" containerName="neutron-db-sync"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.717662 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.727940 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hqbkf"]
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.790758 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.790806 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.790863 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrgk\" (UniqueName: \"kubernetes.io/projected/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-kube-api-access-nfrgk\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.790909 4914
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-config\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.790928 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.790970 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.892218 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfrgk\" (UniqueName: \"kubernetes.io/projected/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-kube-api-access-nfrgk\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.892284 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-config\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.892307 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.892352 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.892402 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.892423 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.893217 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.893765 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.893935 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.894287 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-config\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.894505 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.933782 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfrgk\" (UniqueName: \"kubernetes.io/projected/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-kube-api-access-nfrgk\") pod \"dnsmasq-dns-7bdf86f46f-hqbkf\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.934736 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8755b5764-8ts5v"]
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.936543
4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.939602 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.939815 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.939968 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-mr4pf"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.944850 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.958336 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8755b5764-8ts5v"]
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.995787 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-ovndb-tls-certs\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.995880 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-httpd-config\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.995944 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-combined-ca-bundle\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.995970 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb5c6\" (UniqueName: \"kubernetes.io/projected/9da1cf46-054d-434c-9b77-a82cfd6353f3-kube-api-access-kb5c6\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:26 crc kubenswrapper[4914]: I0127 14:05:26.996049 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-config\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.062620 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf"
Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.098099 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-ovndb-tls-certs\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.098165 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-httpd-config\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.098196 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-combined-ca-bundle\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.098222 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb5c6\" (UniqueName: \"kubernetes.io/projected/9da1cf46-054d-434c-9b77-a82cfd6353f3-kube-api-access-kb5c6\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.098272 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-config\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.109277 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-httpd-config\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.114751 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-combined-ca-bundle\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.115285 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-ovndb-tls-certs\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.119063 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb5c6\" (UniqueName: \"kubernetes.io/projected/9da1cf46-054d-434c-9b77-a82cfd6353f3-kube-api-access-kb5c6\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.122239 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-config\") pod \"neutron-8755b5764-8ts5v\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " pod="openstack/neutron-8755b5764-8ts5v"
Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.200541 4914 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-8755b5764-8ts5v" Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.538885 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-845f6ddb76-569qx" event={"ID":"20138273-cae3-4cc1-960f-f861eca72126","Type":"ContainerStarted","Data":"4f8600ee4babbcb896697bc14567b430422304335b270b1b132a10c2333c2e66"} Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.539632 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-845f6ddb76-569qx" Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.539664 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-845f6ddb76-569qx" Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.562646 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-845f6ddb76-569qx" podStartSLOduration=3.562627678 podStartE2EDuration="3.562627678s" podCreationTimestamp="2026-01-27 14:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:27.562538546 +0000 UTC m=+1285.874888651" watchObservedRunningTime="2026-01-27 14:05:27.562627678 +0000 UTC m=+1285.874977763" Jan 27 14:05:27 crc kubenswrapper[4914]: I0127 14:05:27.833862 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hqbkf"] Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.140385 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8755b5764-8ts5v"] Jan 27 14:05:28 crc kubenswrapper[4914]: W0127 14:05:28.147663 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9da1cf46_054d_434c_9b77_a82cfd6353f3.slice/crio-8a62bcd05b6f41cacdde48ae03cd4bb2599caab5b70eddb5057a44a5403672eb WatchSource:0}: Error finding container 
8a62bcd05b6f41cacdde48ae03cd4bb2599caab5b70eddb5057a44a5403672eb: Status 404 returned error can't find the container with id 8a62bcd05b6f41cacdde48ae03cd4bb2599caab5b70eddb5057a44a5403672eb Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.546113 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8755b5764-8ts5v" event={"ID":"9da1cf46-054d-434c-9b77-a82cfd6353f3","Type":"ContainerStarted","Data":"de8dfd4f95672cd21698c72e3cd09e576103393c5966fd311aa5740381d5c43f"} Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.546391 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8755b5764-8ts5v" event={"ID":"9da1cf46-054d-434c-9b77-a82cfd6353f3","Type":"ContainerStarted","Data":"8a62bcd05b6f41cacdde48ae03cd4bb2599caab5b70eddb5057a44a5403672eb"} Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.550021 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8d56k" event={"ID":"131bae56-5108-4750-8056-68133598a109","Type":"ContainerStarted","Data":"2713ae95d90a75a72e0a7e3e2a0de1f5cdaffa265c5206f5b309c3513aec4ea2"} Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.552709 4914 generic.go:334] "Generic (PLEG): container finished" podID="7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" containerID="4889031c0a669378361f7087856dba6a105677abc62a91d5daf27538b0f9d973" exitCode=0 Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.552937 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" podUID="d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" containerName="barbican-worker-log" containerID="cri-o://815802d2387e85fb93e089c9e45488916c87c9e2c180e91e4697b2e55b3d8c29" gracePeriod=30 Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.553043 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" podUID="d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" 
containerName="barbican-worker" containerID="cri-o://cbfc5f728474a52fada3ab88ec9a39c8b04a7b6a0ba129edc6d60b9402c56156" gracePeriod=30 Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.553198 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf" event={"ID":"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db","Type":"ContainerDied","Data":"4889031c0a669378361f7087856dba6a105677abc62a91d5daf27538b0f9d973"} Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.553224 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf" event={"ID":"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db","Type":"ContainerStarted","Data":"ad4ca58a2b0a464e55c6172fa5cdcdddae12b4821fb0c8870c974182dbb78c71"} Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.553305 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" podUID="6c325fe3-0885-4dfb-b83f-525bc610fe16" containerName="barbican-keystone-listener" containerID="cri-o://f341af659fe755d7f218a580ad08fbe8b91904f5e1144bbc134dc0699dc098ce" gracePeriod=30 Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.553369 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6554f656b5-24nd6" podUID="3e59f256-f1bc-4557-aeff-60d3982f292e" containerName="dnsmasq-dns" containerID="cri-o://dca1806c76f38b4543a5c8078cbb7002459a92385318ad1c7be971b2aeb94f8a" gracePeriod=10 Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.553293 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" podUID="6c325fe3-0885-4dfb-b83f-525bc610fe16" containerName="barbican-keystone-listener-log" containerID="cri-o://e8dcb4f2b8c965415efca450fa703b3b94db6096741dce0654ecd97b3d817d68" gracePeriod=30 Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.648200 4914 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-8d56k" podStartSLOduration=3.621528552 podStartE2EDuration="55.648176489s" podCreationTimestamp="2026-01-27 14:04:33 +0000 UTC" firstStartedPulling="2026-01-27 14:04:34.926096517 +0000 UTC m=+1233.238446602" lastFinishedPulling="2026-01-27 14:05:26.952744454 +0000 UTC m=+1285.265094539" observedRunningTime="2026-01-27 14:05:28.589055068 +0000 UTC m=+1286.901405153" watchObservedRunningTime="2026-01-27 14:05:28.648176489 +0000 UTC m=+1286.960526564" Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.652009 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 14:05:28 crc kubenswrapper[4914]: I0127 14:05:28.652100 4914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.248982 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.260667 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.379417 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-ovsdbserver-sb\") pod \"3e59f256-f1bc-4557-aeff-60d3982f292e\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.379478 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98dwq\" (UniqueName: \"kubernetes.io/projected/3e59f256-f1bc-4557-aeff-60d3982f292e-kube-api-access-98dwq\") pod \"3e59f256-f1bc-4557-aeff-60d3982f292e\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.379562 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-dns-svc\") pod \"3e59f256-f1bc-4557-aeff-60d3982f292e\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.379599 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-config\") pod \"3e59f256-f1bc-4557-aeff-60d3982f292e\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.379633 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-dns-swift-storage-0\") pod \"3e59f256-f1bc-4557-aeff-60d3982f292e\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.379681 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-ovsdbserver-nb\") pod \"3e59f256-f1bc-4557-aeff-60d3982f292e\" (UID: \"3e59f256-f1bc-4557-aeff-60d3982f292e\") " Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.396202 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e59f256-f1bc-4557-aeff-60d3982f292e-kube-api-access-98dwq" (OuterVolumeSpecName: "kube-api-access-98dwq") pod "3e59f256-f1bc-4557-aeff-60d3982f292e" (UID: "3e59f256-f1bc-4557-aeff-60d3982f292e"). InnerVolumeSpecName "kube-api-access-98dwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.479648 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3e59f256-f1bc-4557-aeff-60d3982f292e" (UID: "3e59f256-f1bc-4557-aeff-60d3982f292e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.484439 4914 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.484531 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98dwq\" (UniqueName: \"kubernetes.io/projected/3e59f256-f1bc-4557-aeff-60d3982f292e-kube-api-access-98dwq\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.489850 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-config" (OuterVolumeSpecName: "config") pod "3e59f256-f1bc-4557-aeff-60d3982f292e" (UID: "3e59f256-f1bc-4557-aeff-60d3982f292e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.520218 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e59f256-f1bc-4557-aeff-60d3982f292e" (UID: "3e59f256-f1bc-4557-aeff-60d3982f292e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.535321 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e59f256-f1bc-4557-aeff-60d3982f292e" (UID: "3e59f256-f1bc-4557-aeff-60d3982f292e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.560705 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e59f256-f1bc-4557-aeff-60d3982f292e" (UID: "3e59f256-f1bc-4557-aeff-60d3982f292e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.588770 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.589184 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.589194 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.589203 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e59f256-f1bc-4557-aeff-60d3982f292e-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.629806 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf" event={"ID":"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db","Type":"ContainerStarted","Data":"fc9acc3b25f034775a4b813744ebdbe02604782a2b7f319204a657660dfede22"} Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.634625 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.660240 4914 generic.go:334] "Generic (PLEG): container finished" podID="d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" containerID="cbfc5f728474a52fada3ab88ec9a39c8b04a7b6a0ba129edc6d60b9402c56156" exitCode=0 Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.660274 4914 generic.go:334] "Generic (PLEG): container finished" podID="d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" containerID="815802d2387e85fb93e089c9e45488916c87c9e2c180e91e4697b2e55b3d8c29" exitCode=143 Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.662450 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" event={"ID":"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2","Type":"ContainerDied","Data":"cbfc5f728474a52fada3ab88ec9a39c8b04a7b6a0ba129edc6d60b9402c56156"} Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.662483 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" event={"ID":"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2","Type":"ContainerDied","Data":"815802d2387e85fb93e089c9e45488916c87c9e2c180e91e4697b2e55b3d8c29"} Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.685139 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64bb7f895-7ftxk"] Jan 27 14:05:29 crc kubenswrapper[4914]: E0127 14:05:29.685755 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e59f256-f1bc-4557-aeff-60d3982f292e" containerName="init" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.685773 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e59f256-f1bc-4557-aeff-60d3982f292e" containerName="init" Jan 27 14:05:29 crc kubenswrapper[4914]: E0127 14:05:29.685812 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e59f256-f1bc-4557-aeff-60d3982f292e" containerName="dnsmasq-dns" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.685820 4914 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3e59f256-f1bc-4557-aeff-60d3982f292e" containerName="dnsmasq-dns" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.686077 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e59f256-f1bc-4557-aeff-60d3982f292e" containerName="dnsmasq-dns" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.687585 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.697028 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64bb7f895-7ftxk"] Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.697321 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf" podStartSLOduration=3.697311181 podStartE2EDuration="3.697311181s" podCreationTimestamp="2026-01-27 14:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:29.668023779 +0000 UTC m=+1287.980373864" watchObservedRunningTime="2026-01-27 14:05:29.697311181 +0000 UTC m=+1288.009661266" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.699405 4914 generic.go:334] "Generic (PLEG): container finished" podID="6c325fe3-0885-4dfb-b83f-525bc610fe16" containerID="f341af659fe755d7f218a580ad08fbe8b91904f5e1144bbc134dc0699dc098ce" exitCode=0 Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.699436 4914 generic.go:334] "Generic (PLEG): container finished" podID="6c325fe3-0885-4dfb-b83f-525bc610fe16" containerID="e8dcb4f2b8c965415efca450fa703b3b94db6096741dce0654ecd97b3d817d68" exitCode=143 Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.700313 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.701308 4914 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.705779 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" event={"ID":"6c325fe3-0885-4dfb-b83f-525bc610fe16","Type":"ContainerDied","Data":"f341af659fe755d7f218a580ad08fbe8b91904f5e1144bbc134dc0699dc098ce"} Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.705871 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" event={"ID":"6c325fe3-0885-4dfb-b83f-525bc610fe16","Type":"ContainerDied","Data":"e8dcb4f2b8c965415efca450fa703b3b94db6096741dce0654ecd97b3d817d68"} Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.706043 4914 generic.go:334] "Generic (PLEG): container finished" podID="3e59f256-f1bc-4557-aeff-60d3982f292e" containerID="dca1806c76f38b4543a5c8078cbb7002459a92385318ad1c7be971b2aeb94f8a" exitCode=0 Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.706113 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-24nd6" event={"ID":"3e59f256-f1bc-4557-aeff-60d3982f292e","Type":"ContainerDied","Data":"dca1806c76f38b4543a5c8078cbb7002459a92385318ad1c7be971b2aeb94f8a"} Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.706146 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-24nd6" event={"ID":"3e59f256-f1bc-4557-aeff-60d3982f292e","Type":"ContainerDied","Data":"16e9e8096810b680108c647647ea2c138c325ff370492fcfeb4e85da587a8e50"} Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.706163 4914 scope.go:117] "RemoveContainer" containerID="dca1806c76f38b4543a5c8078cbb7002459a92385318ad1c7be971b2aeb94f8a" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.706317 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-24nd6" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.719954 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8755b5764-8ts5v" event={"ID":"9da1cf46-054d-434c-9b77-a82cfd6353f3","Type":"ContainerStarted","Data":"2c79b93366fe15b0506636708d94a36f41b21e63a852e5e89f23185ea10d7f8d"} Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.728804 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8755b5764-8ts5v" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.757254 4914 scope.go:117] "RemoveContainer" containerID="59f1ff7e15f9fcd781c22209ccc9a56a5ffcc8a51b88058cb080c490e0719a4e" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.802138 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-config\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.802191 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-combined-ca-bundle\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.802226 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-public-tls-certs\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.802259 4914 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5hdx\" (UniqueName: \"kubernetes.io/projected/41af7c55-6669-403f-b4cd-e62be1cd1db7-kube-api-access-n5hdx\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.802406 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-ovndb-tls-certs\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.802443 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-internal-tls-certs\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.802565 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-httpd-config\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.864973 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-24nd6"] Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.884972 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-24nd6"] Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.902647 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-8755b5764-8ts5v" podStartSLOduration=3.902624007 podStartE2EDuration="3.902624007s" podCreationTimestamp="2026-01-27 14:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:29.813494315 +0000 UTC m=+1288.125844400" watchObservedRunningTime="2026-01-27 14:05:29.902624007 +0000 UTC m=+1288.214974092" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.903309 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5hdx\" (UniqueName: \"kubernetes.io/projected/41af7c55-6669-403f-b4cd-e62be1cd1db7-kube-api-access-n5hdx\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.903394 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-ovndb-tls-certs\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.903433 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-internal-tls-certs\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.903473 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-httpd-config\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: 
I0127 14:05:29.903547 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-config\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.903569 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-combined-ca-bundle\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.903587 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-public-tls-certs\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.910794 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-public-tls-certs\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.916312 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-internal-tls-certs\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.922946 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-config\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.923575 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-combined-ca-bundle\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.925530 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-httpd-config\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.927376 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-ovndb-tls-certs\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.936987 4914 scope.go:117] "RemoveContainer" containerID="dca1806c76f38b4543a5c8078cbb7002459a92385318ad1c7be971b2aeb94f8a" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.938674 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5hdx\" (UniqueName: \"kubernetes.io/projected/41af7c55-6669-403f-b4cd-e62be1cd1db7-kube-api-access-n5hdx\") pod \"neutron-64bb7f895-7ftxk\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") " pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:29 crc kubenswrapper[4914]: E0127 14:05:29.940998 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"dca1806c76f38b4543a5c8078cbb7002459a92385318ad1c7be971b2aeb94f8a\": container with ID starting with dca1806c76f38b4543a5c8078cbb7002459a92385318ad1c7be971b2aeb94f8a not found: ID does not exist" containerID="dca1806c76f38b4543a5c8078cbb7002459a92385318ad1c7be971b2aeb94f8a" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.941046 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dca1806c76f38b4543a5c8078cbb7002459a92385318ad1c7be971b2aeb94f8a"} err="failed to get container status \"dca1806c76f38b4543a5c8078cbb7002459a92385318ad1c7be971b2aeb94f8a\": rpc error: code = NotFound desc = could not find container \"dca1806c76f38b4543a5c8078cbb7002459a92385318ad1c7be971b2aeb94f8a\": container with ID starting with dca1806c76f38b4543a5c8078cbb7002459a92385318ad1c7be971b2aeb94f8a not found: ID does not exist" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.941071 4914 scope.go:117] "RemoveContainer" containerID="59f1ff7e15f9fcd781c22209ccc9a56a5ffcc8a51b88058cb080c490e0719a4e" Jan 27 14:05:29 crc kubenswrapper[4914]: E0127 14:05:29.956966 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59f1ff7e15f9fcd781c22209ccc9a56a5ffcc8a51b88058cb080c490e0719a4e\": container with ID starting with 59f1ff7e15f9fcd781c22209ccc9a56a5ffcc8a51b88058cb080c490e0719a4e not found: ID does not exist" containerID="59f1ff7e15f9fcd781c22209ccc9a56a5ffcc8a51b88058cb080c490e0719a4e" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.957017 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59f1ff7e15f9fcd781c22209ccc9a56a5ffcc8a51b88058cb080c490e0719a4e"} err="failed to get container status \"59f1ff7e15f9fcd781c22209ccc9a56a5ffcc8a51b88058cb080c490e0719a4e\": rpc error: code = NotFound desc = could not find container 
\"59f1ff7e15f9fcd781c22209ccc9a56a5ffcc8a51b88058cb080c490e0719a4e\": container with ID starting with 59f1ff7e15f9fcd781c22209ccc9a56a5ffcc8a51b88058cb080c490e0719a4e not found: ID does not exist" Jan 27 14:05:29 crc kubenswrapper[4914]: I0127 14:05:29.996178 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.029659 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.036187 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.106646 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp6w4\" (UniqueName: \"kubernetes.io/projected/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-kube-api-access-xp6w4\") pod \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.106759 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-config-data-custom\") pod \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.106850 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-config-data\") pod \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.106963 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-logs\") pod \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.107257 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-combined-ca-bundle\") pod \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\" (UID: \"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2\") " Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.107323 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6kj9\" (UniqueName: \"kubernetes.io/projected/6c325fe3-0885-4dfb-b83f-525bc610fe16-kube-api-access-r6kj9\") pod \"6c325fe3-0885-4dfb-b83f-525bc610fe16\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.107333 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-logs" (OuterVolumeSpecName: "logs") pod "d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" (UID: "d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.107678 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.110756 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" (UID: "d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.130049 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c325fe3-0885-4dfb-b83f-525bc610fe16-kube-api-access-r6kj9" (OuterVolumeSpecName: "kube-api-access-r6kj9") pod "6c325fe3-0885-4dfb-b83f-525bc610fe16" (UID: "6c325fe3-0885-4dfb-b83f-525bc610fe16"). InnerVolumeSpecName "kube-api-access-r6kj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.150994 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-kube-api-access-xp6w4" (OuterVolumeSpecName: "kube-api-access-xp6w4") pod "d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" (UID: "d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2"). InnerVolumeSpecName "kube-api-access-xp6w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.224412 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-config-data-custom\") pod \"6c325fe3-0885-4dfb-b83f-525bc610fe16\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.224712 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c325fe3-0885-4dfb-b83f-525bc610fe16-logs\") pod \"6c325fe3-0885-4dfb-b83f-525bc610fe16\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.224854 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-combined-ca-bundle\") pod \"6c325fe3-0885-4dfb-b83f-525bc610fe16\" (UID: 
\"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.224911 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-config-data\") pod \"6c325fe3-0885-4dfb-b83f-525bc610fe16\" (UID: \"6c325fe3-0885-4dfb-b83f-525bc610fe16\") " Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.225313 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6kj9\" (UniqueName: \"kubernetes.io/projected/6c325fe3-0885-4dfb-b83f-525bc610fe16-kube-api-access-r6kj9\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.225323 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp6w4\" (UniqueName: \"kubernetes.io/projected/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-kube-api-access-xp6w4\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.225333 4914 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.226471 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c325fe3-0885-4dfb-b83f-525bc610fe16-logs" (OuterVolumeSpecName: "logs") pod "6c325fe3-0885-4dfb-b83f-525bc610fe16" (UID: "6c325fe3-0885-4dfb-b83f-525bc610fe16"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.243060 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6c325fe3-0885-4dfb-b83f-525bc610fe16" (UID: "6c325fe3-0885-4dfb-b83f-525bc610fe16"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.266015 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" (UID: "d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.287932 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c325fe3-0885-4dfb-b83f-525bc610fe16" (UID: "6c325fe3-0885-4dfb-b83f-525bc610fe16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.299148 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-config-data" (OuterVolumeSpecName: "config-data") pod "d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" (UID: "d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.319928 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e59f256-f1bc-4557-aeff-60d3982f292e" path="/var/lib/kubelet/pods/3e59f256-f1bc-4557-aeff-60d3982f292e/volumes" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.327153 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.327183 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.327196 4914 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.327206 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c325fe3-0885-4dfb-b83f-525bc610fe16-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.327216 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.334032 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-config-data" (OuterVolumeSpecName: "config-data") pod "6c325fe3-0885-4dfb-b83f-525bc610fe16" (UID: "6c325fe3-0885-4dfb-b83f-525bc610fe16"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.429783 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c325fe3-0885-4dfb-b83f-525bc610fe16-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.573852 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.574241 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.634688 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.653057 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64bb7f895-7ftxk"] Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.682162 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.730797 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bb7f895-7ftxk" event={"ID":"41af7c55-6669-403f-b4cd-e62be1cd1db7","Type":"ContainerStarted","Data":"4900f5319fcdc6e9760f9a359594ff584f22e880ac5761dd1dfbe01c9146e1f4"} Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.733052 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" event={"ID":"d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2","Type":"ContainerDied","Data":"99aa37093bd8759ff030e0507594d8e85f485cf319fbb0c0b97ffb9db3aefb2e"} Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.733089 4914 scope.go:117] "RemoveContainer" 
containerID="cbfc5f728474a52fada3ab88ec9a39c8b04a7b6a0ba129edc6d60b9402c56156" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.733179 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-cf9c8fc8c-dzqz5" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.765263 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" event={"ID":"6c325fe3-0885-4dfb-b83f-525bc610fe16","Type":"ContainerDied","Data":"93b0747ead5756d7a705a74d40235275d20a3115c0c78b2746a3715ec02b0cd2"} Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.765347 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d44bd966-mhzhf" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.766380 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.766400 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.853856 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75b4645c86-9r9q2" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.862421 4914 scope.go:117] "RemoveContainer" containerID="815802d2387e85fb93e089c9e45488916c87c9e2c180e91e4697b2e55b3d8c29" Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.893277 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-cf9c8fc8c-dzqz5"] Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.951988 4914 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/barbican-worker-cf9c8fc8c-dzqz5"] Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.961916 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6d44bd966-mhzhf"] Jan 27 14:05:30 crc kubenswrapper[4914]: I0127 14:05:30.970008 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-6d44bd966-mhzhf"] Jan 27 14:05:31 crc kubenswrapper[4914]: I0127 14:05:31.002875 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6bb6c77c5d-pwr6c" podUID="d7209cbb-e572-463b-bb43-9805cd58ea57" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 27 14:05:31 crc kubenswrapper[4914]: I0127 14:05:31.779468 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bb7f895-7ftxk" event={"ID":"41af7c55-6669-403f-b4cd-e62be1cd1db7","Type":"ContainerStarted","Data":"aca192b82e5b342a6db1a51463adf47549942e55078ccb87c3e0dcf8b9b6c353"} Jan 27 14:05:32 crc kubenswrapper[4914]: I0127 14:05:32.310769 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c325fe3-0885-4dfb-b83f-525bc610fe16" path="/var/lib/kubelet/pods/6c325fe3-0885-4dfb-b83f-525bc610fe16/volumes" Jan 27 14:05:32 crc kubenswrapper[4914]: I0127 14:05:32.311462 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" path="/var/lib/kubelet/pods/d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2/volumes" Jan 27 14:05:32 crc kubenswrapper[4914]: I0127 14:05:32.803067 4914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:05:32 crc kubenswrapper[4914]: I0127 14:05:32.803091 4914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:05:33 crc kubenswrapper[4914]: I0127 14:05:33.265516 4914 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 14:05:33 crc kubenswrapper[4914]: I0127 14:05:33.466712 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 14:05:33 crc kubenswrapper[4914]: I0127 14:05:33.954045 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:34 crc kubenswrapper[4914]: I0127 14:05:34.488821 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:34 crc kubenswrapper[4914]: I0127 14:05:34.840750 4914 generic.go:334] "Generic (PLEG): container finished" podID="131bae56-5108-4750-8056-68133598a109" containerID="2713ae95d90a75a72e0a7e3e2a0de1f5cdaffa265c5206f5b309c3513aec4ea2" exitCode=0 Jan 27 14:05:34 crc kubenswrapper[4914]: I0127 14:05:34.841576 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8d56k" event={"ID":"131bae56-5108-4750-8056-68133598a109","Type":"ContainerDied","Data":"2713ae95d90a75a72e0a7e3e2a0de1f5cdaffa265c5206f5b309c3513aec4ea2"} Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.070092 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf" Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.142987 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"] Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.143230 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k" podUID="972edb24-1cfb-4529-bc89-2bb9a89c5579" containerName="dnsmasq-dns" containerID="cri-o://4c77ef4f56b091564b71e02baa818192b820a2b689530436a9b6b957dce2bee2" gracePeriod=10 Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.254271 4914 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k" podUID="972edb24-1cfb-4529-bc89-2bb9a89c5579" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.270116 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-845f6ddb76-569qx" Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.349208 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-845f6ddb76-569qx" Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.420421 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78d766f864-q9f4n"] Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.420636 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78d766f864-q9f4n" podUID="b004d6c6-2289-4b0c-8779-ab5a36e853ee" containerName="barbican-api-log" containerID="cri-o://fc8a11d54a876b3fb4ee305d207fd66e4bd1d465f7d67c2dfc4b843b52319df4" gracePeriod=30 Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.421070 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78d766f864-q9f4n" podUID="b004d6c6-2289-4b0c-8779-ab5a36e853ee" containerName="barbican-api" containerID="cri-o://950934a28b67a4a5b0a56b08f76e6500884ccce199b3e557cd45b2444f12e931" gracePeriod=30 Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.691079 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.691140 4914 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.872013 4914 generic.go:334] "Generic (PLEG): container finished" podID="b004d6c6-2289-4b0c-8779-ab5a36e853ee" containerID="fc8a11d54a876b3fb4ee305d207fd66e4bd1d465f7d67c2dfc4b843b52319df4" exitCode=143 Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.872092 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78d766f864-q9f4n" event={"ID":"b004d6c6-2289-4b0c-8779-ab5a36e853ee","Type":"ContainerDied","Data":"fc8a11d54a876b3fb4ee305d207fd66e4bd1d465f7d67c2dfc4b843b52319df4"} Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.874009 4914 generic.go:334] "Generic (PLEG): container finished" podID="972edb24-1cfb-4529-bc89-2bb9a89c5579" containerID="4c77ef4f56b091564b71e02baa818192b820a2b689530436a9b6b957dce2bee2" exitCode=0 Jan 27 14:05:37 crc kubenswrapper[4914]: I0127 14:05:37.874777 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k" event={"ID":"972edb24-1cfb-4529-bc89-2bb9a89c5579","Type":"ContainerDied","Data":"4c77ef4f56b091564b71e02baa818192b820a2b689530436a9b6b957dce2bee2"} Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.519305 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8d56k" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.664719 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjpzw\" (UniqueName: \"kubernetes.io/projected/131bae56-5108-4750-8056-68133598a109-kube-api-access-bjpzw\") pod \"131bae56-5108-4750-8056-68133598a109\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.664800 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-scripts\") pod \"131bae56-5108-4750-8056-68133598a109\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.664850 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-combined-ca-bundle\") pod \"131bae56-5108-4750-8056-68133598a109\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.664893 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-db-sync-config-data\") pod \"131bae56-5108-4750-8056-68133598a109\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.664943 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/131bae56-5108-4750-8056-68133598a109-etc-machine-id\") pod \"131bae56-5108-4750-8056-68133598a109\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.665090 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-config-data\") pod \"131bae56-5108-4750-8056-68133598a109\" (UID: \"131bae56-5108-4750-8056-68133598a109\") " Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.665608 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/131bae56-5108-4750-8056-68133598a109-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "131bae56-5108-4750-8056-68133598a109" (UID: "131bae56-5108-4750-8056-68133598a109"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.670246 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-scripts" (OuterVolumeSpecName: "scripts") pod "131bae56-5108-4750-8056-68133598a109" (UID: "131bae56-5108-4750-8056-68133598a109"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.670270 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "131bae56-5108-4750-8056-68133598a109" (UID: "131bae56-5108-4750-8056-68133598a109"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.681048 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/131bae56-5108-4750-8056-68133598a109-kube-api-access-bjpzw" (OuterVolumeSpecName: "kube-api-access-bjpzw") pod "131bae56-5108-4750-8056-68133598a109" (UID: "131bae56-5108-4750-8056-68133598a109"). InnerVolumeSpecName "kube-api-access-bjpzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.698711 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "131bae56-5108-4750-8056-68133598a109" (UID: "131bae56-5108-4750-8056-68133598a109"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.720930 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-config-data" (OuterVolumeSpecName: "config-data") pod "131bae56-5108-4750-8056-68133598a109" (UID: "131bae56-5108-4750-8056-68133598a109"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.767379 4914 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.767409 4914 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/131bae56-5108-4750-8056-68133598a109-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.767418 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.767428 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjpzw\" (UniqueName: \"kubernetes.io/projected/131bae56-5108-4750-8056-68133598a109-kube-api-access-bjpzw\") on node \"crc\" 
DevicePath \"\"" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.767438 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.767446 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131bae56-5108-4750-8056-68133598a109-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.885901 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8d56k" event={"ID":"131bae56-5108-4750-8056-68133598a109","Type":"ContainerDied","Data":"074d88f3e34e2094bc80a5aefadd7ae525544076bd0afc5865cc240734ae40f8"} Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.886196 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="074d88f3e34e2094bc80a5aefadd7ae525544076bd0afc5865cc240734ae40f8" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.885940 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8d56k" Jan 27 14:05:38 crc kubenswrapper[4914]: I0127 14:05:38.980124 4914 scope.go:117] "RemoveContainer" containerID="f341af659fe755d7f218a580ad08fbe8b91904f5e1144bbc134dc0699dc098ce" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.057079 4914 scope.go:117] "RemoveContainer" containerID="e8dcb4f2b8c965415efca450fa703b3b94db6096741dce0654ecd97b3d817d68" Jan 27 14:05:39 crc kubenswrapper[4914]: E0127 14:05:39.271067 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="ec53709e-df2b-4fc9-b9ac-6e144a262455" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.276274 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.386447 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dzdf\" (UniqueName: \"kubernetes.io/projected/972edb24-1cfb-4529-bc89-2bb9a89c5579-kube-api-access-4dzdf\") pod \"972edb24-1cfb-4529-bc89-2bb9a89c5579\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.386593 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-dns-swift-storage-0\") pod \"972edb24-1cfb-4529-bc89-2bb9a89c5579\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.386616 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-ovsdbserver-sb\") pod \"972edb24-1cfb-4529-bc89-2bb9a89c5579\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.386708 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-config\") pod \"972edb24-1cfb-4529-bc89-2bb9a89c5579\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.386799 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-dns-svc\") pod \"972edb24-1cfb-4529-bc89-2bb9a89c5579\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.386822 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-ovsdbserver-nb\") pod \"972edb24-1cfb-4529-bc89-2bb9a89c5579\" (UID: \"972edb24-1cfb-4529-bc89-2bb9a89c5579\") " Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.392246 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/972edb24-1cfb-4529-bc89-2bb9a89c5579-kube-api-access-4dzdf" (OuterVolumeSpecName: "kube-api-access-4dzdf") pod "972edb24-1cfb-4529-bc89-2bb9a89c5579" (UID: "972edb24-1cfb-4529-bc89-2bb9a89c5579"). InnerVolumeSpecName "kube-api-access-4dzdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.433847 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-config" (OuterVolumeSpecName: "config") pod "972edb24-1cfb-4529-bc89-2bb9a89c5579" (UID: "972edb24-1cfb-4529-bc89-2bb9a89c5579"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.446143 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "972edb24-1cfb-4529-bc89-2bb9a89c5579" (UID: "972edb24-1cfb-4529-bc89-2bb9a89c5579"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.454698 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "972edb24-1cfb-4529-bc89-2bb9a89c5579" (UID: "972edb24-1cfb-4529-bc89-2bb9a89c5579"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.457078 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "972edb24-1cfb-4529-bc89-2bb9a89c5579" (UID: "972edb24-1cfb-4529-bc89-2bb9a89c5579"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.459103 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "972edb24-1cfb-4529-bc89-2bb9a89c5579" (UID: "972edb24-1cfb-4529-bc89-2bb9a89c5579"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.489119 4914 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.489154 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.489164 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.489174 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:39 crc 
kubenswrapper[4914]: I0127 14:05:39.489182 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/972edb24-1cfb-4529-bc89-2bb9a89c5579-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.489191 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dzdf\" (UniqueName: \"kubernetes.io/projected/972edb24-1cfb-4529-bc89-2bb9a89c5579-kube-api-access-4dzdf\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716065 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:05:39 crc kubenswrapper[4914]: E0127 14:05:39.716469 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972edb24-1cfb-4529-bc89-2bb9a89c5579" containerName="init" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716499 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="972edb24-1cfb-4529-bc89-2bb9a89c5579" containerName="init" Jan 27 14:05:39 crc kubenswrapper[4914]: E0127 14:05:39.716516 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c325fe3-0885-4dfb-b83f-525bc610fe16" containerName="barbican-keystone-listener-log" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716523 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c325fe3-0885-4dfb-b83f-525bc610fe16" containerName="barbican-keystone-listener-log" Jan 27 14:05:39 crc kubenswrapper[4914]: E0127 14:05:39.716533 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" containerName="barbican-worker-log" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716540 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" containerName="barbican-worker-log" Jan 27 14:05:39 crc kubenswrapper[4914]: E0127 14:05:39.716549 4914 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="972edb24-1cfb-4529-bc89-2bb9a89c5579" containerName="dnsmasq-dns" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716554 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="972edb24-1cfb-4529-bc89-2bb9a89c5579" containerName="dnsmasq-dns" Jan 27 14:05:39 crc kubenswrapper[4914]: E0127 14:05:39.716570 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131bae56-5108-4750-8056-68133598a109" containerName="cinder-db-sync" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716576 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="131bae56-5108-4750-8056-68133598a109" containerName="cinder-db-sync" Jan 27 14:05:39 crc kubenswrapper[4914]: E0127 14:05:39.716589 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c325fe3-0885-4dfb-b83f-525bc610fe16" containerName="barbican-keystone-listener" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716595 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c325fe3-0885-4dfb-b83f-525bc610fe16" containerName="barbican-keystone-listener" Jan 27 14:05:39 crc kubenswrapper[4914]: E0127 14:05:39.716608 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" containerName="barbican-worker" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716613 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" containerName="barbican-worker" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716810 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c325fe3-0885-4dfb-b83f-525bc610fe16" containerName="barbican-keystone-listener" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716852 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" containerName="barbican-worker" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716865 4914 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6c325fe3-0885-4dfb-b83f-525bc610fe16" containerName="barbican-keystone-listener-log" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716879 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54eb659-44d2-4cdb-b9ab-3f7b2930bfd2" containerName="barbican-worker-log" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716894 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="972edb24-1cfb-4529-bc89-2bb9a89c5579" containerName="dnsmasq-dns" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.716905 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="131bae56-5108-4750-8056-68133598a109" containerName="cinder-db-sync" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.717817 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.722817 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.723285 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.723557 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.736659 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-flhhv" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.737369 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.780580 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-drd5h"] Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.782294 4914 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.796585 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-drd5h"] Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.894982 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.895048 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.895089 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.895117 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.895137 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q6fgs\" (UniqueName: \"kubernetes.io/projected/6cb7fe78-4222-4e97-9b06-62e24491812a-kube-api-access-q6fgs\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.895170 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n89bn\" (UniqueName: \"kubernetes.io/projected/ab2a075c-06a5-4864-95e9-d973906bc0c6-kube-api-access-n89bn\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.895196 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.895214 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab2a075c-06a5-4864-95e9-d973906bc0c6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.895236 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-config\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.895257 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.895376 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.895409 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.906115 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec53709e-df2b-4fc9-b9ac-6e144a262455","Type":"ContainerStarted","Data":"7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13"} Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.906304 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec53709e-df2b-4fc9-b9ac-6e144a262455" containerName="sg-core" containerID="cri-o://3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547" gracePeriod=30 Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.906578 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.906922 4914 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="ec53709e-df2b-4fc9-b9ac-6e144a262455" containerName="proxy-httpd" containerID="cri-o://7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13" gracePeriod=30 Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.908050 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.912512 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.916478 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.918725 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.922060 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.922123 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-fbv2k" event={"ID":"972edb24-1cfb-4529-bc89-2bb9a89c5579","Type":"ContainerDied","Data":"3713cab33a152fa921aa68b2c20b0f6aa6a89c53758ad00e47602d21c21e2aac"} Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.922170 4914 scope.go:117] "RemoveContainer" containerID="4c77ef4f56b091564b71e02baa818192b820a2b689530436a9b6b957dce2bee2" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.927790 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bb7f895-7ftxk" event={"ID":"41af7c55-6669-403f-b4cd-e62be1cd1db7","Type":"ContainerStarted","Data":"570edc130495a01cac24482f6ca01a7b3224c9f3c7d3637976c3592d6da75559"} Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.928018 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64bb7f895-7ftxk" Jan 
27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.968194 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64bb7f895-7ftxk" podStartSLOduration=10.968176421 podStartE2EDuration="10.968176421s" podCreationTimestamp="2026-01-27 14:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:39.96593372 +0000 UTC m=+1298.278283805" watchObservedRunningTime="2026-01-27 14:05:39.968176421 +0000 UTC m=+1298.280526506" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.987324 4914 scope.go:117] "RemoveContainer" containerID="cabdc311e35f003f50386267c040ef94967edbdcd18341e1490798a55d210622" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998118 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72439b1e-f3fd-438a-be94-a5663bcefb49-etc-machine-id\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998155 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998179 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998208 4914 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998228 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72439b1e-f3fd-438a-be94-a5663bcefb49-logs\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998255 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6fgs\" (UniqueName: \"kubernetes.io/projected/6cb7fe78-4222-4e97-9b06-62e24491812a-kube-api-access-q6fgs\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998287 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n89bn\" (UniqueName: \"kubernetes.io/projected/ab2a075c-06a5-4864-95e9-d973906bc0c6-kube-api-access-n89bn\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998302 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-config-data-custom\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998320 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-config-data\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998348 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998382 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab2a075c-06a5-4864-95e9-d973906bc0c6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998417 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k7mc\" (UniqueName: \"kubernetes.io/projected/72439b1e-f3fd-438a-be94-a5663bcefb49-kube-api-access-7k7mc\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998438 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-config\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998460 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998485 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998511 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-scripts\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998566 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998589 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.998628 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc 
kubenswrapper[4914]: I0127 14:05:39.999494 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:39 crc kubenswrapper[4914]: I0127 14:05:39.999967 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab2a075c-06a5-4864-95e9-d973906bc0c6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.000497 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-config\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.002743 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.003369 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.004070 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.005774 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.006285 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.007346 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.007630 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.019928 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n89bn\" (UniqueName: \"kubernetes.io/projected/ab2a075c-06a5-4864-95e9-d973906bc0c6-kube-api-access-n89bn\") pod \"cinder-scheduler-0\" (UID: 
\"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.022378 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"] Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.033175 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.035294 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-fbv2k"] Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.036406 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6fgs\" (UniqueName: \"kubernetes.io/projected/6cb7fe78-4222-4e97-9b06-62e24491812a-kube-api-access-q6fgs\") pod \"dnsmasq-dns-75bfc9b94f-drd5h\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.099590 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72439b1e-f3fd-438a-be94-a5663bcefb49-etc-machine-id\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.099639 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.099682 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72439b1e-f3fd-438a-be94-a5663bcefb49-logs\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") 
" pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.099690 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72439b1e-f3fd-438a-be94-a5663bcefb49-etc-machine-id\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.099742 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-config-data-custom\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.099775 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-config-data\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.099813 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k7mc\" (UniqueName: \"kubernetes.io/projected/72439b1e-f3fd-438a-be94-a5663bcefb49-kube-api-access-7k7mc\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.099877 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-scripts\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.100436 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/72439b1e-f3fd-438a-be94-a5663bcefb49-logs\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.104036 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-config-data\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.104061 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.104587 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-config-data-custom\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.105287 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-scripts\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.114458 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.121451 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k7mc\" (UniqueName: \"kubernetes.io/projected/72439b1e-f3fd-438a-be94-a5663bcefb49-kube-api-access-7k7mc\") pod \"cinder-api-0\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.261739 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.322638 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="972edb24-1cfb-4529-bc89-2bb9a89c5579" path="/var/lib/kubelet/pods/972edb24-1cfb-4529-bc89-2bb9a89c5579/volumes" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.502955 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.606922 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.710677 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-sg-core-conf-yaml\") pod \"ec53709e-df2b-4fc9-b9ac-6e144a262455\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.710750 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-combined-ca-bundle\") pod \"ec53709e-df2b-4fc9-b9ac-6e144a262455\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.710808 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-config-data\") pod \"ec53709e-df2b-4fc9-b9ac-6e144a262455\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.710868 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec53709e-df2b-4fc9-b9ac-6e144a262455-log-httpd\") pod \"ec53709e-df2b-4fc9-b9ac-6e144a262455\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.710910 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-scripts\") pod \"ec53709e-df2b-4fc9-b9ac-6e144a262455\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.710958 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrldw\" (UniqueName: 
\"kubernetes.io/projected/ec53709e-df2b-4fc9-b9ac-6e144a262455-kube-api-access-mrldw\") pod \"ec53709e-df2b-4fc9-b9ac-6e144a262455\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.710995 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec53709e-df2b-4fc9-b9ac-6e144a262455-run-httpd\") pod \"ec53709e-df2b-4fc9-b9ac-6e144a262455\" (UID: \"ec53709e-df2b-4fc9-b9ac-6e144a262455\") " Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.711789 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec53709e-df2b-4fc9-b9ac-6e144a262455-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ec53709e-df2b-4fc9-b9ac-6e144a262455" (UID: "ec53709e-df2b-4fc9-b9ac-6e144a262455"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.712295 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec53709e-df2b-4fc9-b9ac-6e144a262455-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ec53709e-df2b-4fc9-b9ac-6e144a262455" (UID: "ec53709e-df2b-4fc9-b9ac-6e144a262455"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.716542 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec53709e-df2b-4fc9-b9ac-6e144a262455-kube-api-access-mrldw" (OuterVolumeSpecName: "kube-api-access-mrldw") pod "ec53709e-df2b-4fc9-b9ac-6e144a262455" (UID: "ec53709e-df2b-4fc9-b9ac-6e144a262455"). InnerVolumeSpecName "kube-api-access-mrldw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.740202 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ec53709e-df2b-4fc9-b9ac-6e144a262455" (UID: "ec53709e-df2b-4fc9-b9ac-6e144a262455"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.751168 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-scripts" (OuterVolumeSpecName: "scripts") pod "ec53709e-df2b-4fc9-b9ac-6e144a262455" (UID: "ec53709e-df2b-4fc9-b9ac-6e144a262455"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.768251 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-drd5h"] Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.789972 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec53709e-df2b-4fc9-b9ac-6e144a262455" (UID: "ec53709e-df2b-4fc9-b9ac-6e144a262455"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.815224 4914 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.815261 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.815273 4914 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec53709e-df2b-4fc9-b9ac-6e144a262455-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.815288 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.815299 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrldw\" (UniqueName: \"kubernetes.io/projected/ec53709e-df2b-4fc9-b9ac-6e144a262455-kube-api-access-mrldw\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.815311 4914 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec53709e-df2b-4fc9-b9ac-6e144a262455-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.839581 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-config-data" (OuterVolumeSpecName: "config-data") pod "ec53709e-df2b-4fc9-b9ac-6e144a262455" (UID: "ec53709e-df2b-4fc9-b9ac-6e144a262455"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.878669 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.917139 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec53709e-df2b-4fc9-b9ac-6e144a262455-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.938594 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab2a075c-06a5-4864-95e9-d973906bc0c6","Type":"ContainerStarted","Data":"223141cf4491e1a19c66fda459e08487f138d0c24873701a3e6572bfbf1782d0"} Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.939690 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" event={"ID":"6cb7fe78-4222-4e97-9b06-62e24491812a","Type":"ContainerStarted","Data":"cc647c470ad9e612c519cf7d9cfff299319b843f30cd275cdccece4181fe1210"} Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.941305 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72439b1e-f3fd-438a-be94-a5663bcefb49","Type":"ContainerStarted","Data":"44396b80569aa62371aed6895d9bcf42382ddfe140d651bb6e8013c0e2de32c8"} Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.943278 4914 generic.go:334] "Generic (PLEG): container finished" podID="b004d6c6-2289-4b0c-8779-ab5a36e853ee" containerID="950934a28b67a4a5b0a56b08f76e6500884ccce199b3e557cd45b2444f12e931" exitCode=0 Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.943339 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78d766f864-q9f4n" event={"ID":"b004d6c6-2289-4b0c-8779-ab5a36e853ee","Type":"ContainerDied","Data":"950934a28b67a4a5b0a56b08f76e6500884ccce199b3e557cd45b2444f12e931"} Jan 27 14:05:40 
crc kubenswrapper[4914]: I0127 14:05:40.945312 4914 generic.go:334] "Generic (PLEG): container finished" podID="ec53709e-df2b-4fc9-b9ac-6e144a262455" containerID="7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13" exitCode=0 Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.945343 4914 generic.go:334] "Generic (PLEG): container finished" podID="ec53709e-df2b-4fc9-b9ac-6e144a262455" containerID="3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547" exitCode=2 Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.945374 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.945510 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec53709e-df2b-4fc9-b9ac-6e144a262455","Type":"ContainerDied","Data":"7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13"} Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.945583 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec53709e-df2b-4fc9-b9ac-6e144a262455","Type":"ContainerDied","Data":"3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547"} Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.945686 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec53709e-df2b-4fc9-b9ac-6e144a262455","Type":"ContainerDied","Data":"9cfb0d219fc3bc6aebcdb73c7b314e6b7d5698e507f87519359a9c4607cd8d0d"} Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.945666 4914 scope.go:117] "RemoveContainer" containerID="7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13" Jan 27 14:05:40 crc kubenswrapper[4914]: I0127 14:05:40.978358 4914 scope.go:117] "RemoveContainer" containerID="3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.069536 4914 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.070154 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.084615 4914 scope.go:117] "RemoveContainer" containerID="7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13" Jan 27 14:05:41 crc kubenswrapper[4914]: E0127 14:05:41.086235 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13\": container with ID starting with 7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13 not found: ID does not exist" containerID="7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.086273 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13"} err="failed to get container status \"7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13\": rpc error: code = NotFound desc = could not find container \"7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13\": container with ID starting with 7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13 not found: ID does not exist" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.086304 4914 scope.go:117] "RemoveContainer" containerID="3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547" Jan 27 14:05:41 crc kubenswrapper[4914]: E0127 14:05:41.090253 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547\": container with ID starting with 
3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547 not found: ID does not exist" containerID="3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.090567 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547"} err="failed to get container status \"3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547\": rpc error: code = NotFound desc = could not find container \"3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547\": container with ID starting with 3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547 not found: ID does not exist" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.090596 4914 scope.go:117] "RemoveContainer" containerID="7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.090951 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13"} err="failed to get container status \"7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13\": rpc error: code = NotFound desc = could not find container \"7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13\": container with ID starting with 7e5f03b5b652113c4740938ed1ddd8615e484eb16a7bbdcd65f61cae6ba95f13 not found: ID does not exist" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.090978 4914 scope.go:117] "RemoveContainer" containerID="3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.091226 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547"} err="failed to get container status 
\"3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547\": rpc error: code = NotFound desc = could not find container \"3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547\": container with ID starting with 3bfa9a52691cb78ec29adecf9d306fe1693aa7b19457ad519d470c5b92f55547 not found: ID does not exist" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.093976 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.144182 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:05:41 crc kubenswrapper[4914]: E0127 14:05:41.144677 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b004d6c6-2289-4b0c-8779-ab5a36e853ee" containerName="barbican-api-log" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.144694 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b004d6c6-2289-4b0c-8779-ab5a36e853ee" containerName="barbican-api-log" Jan 27 14:05:41 crc kubenswrapper[4914]: E0127 14:05:41.144723 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec53709e-df2b-4fc9-b9ac-6e144a262455" containerName="sg-core" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.144731 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec53709e-df2b-4fc9-b9ac-6e144a262455" containerName="sg-core" Jan 27 14:05:41 crc kubenswrapper[4914]: E0127 14:05:41.144750 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b004d6c6-2289-4b0c-8779-ab5a36e853ee" containerName="barbican-api" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.144758 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b004d6c6-2289-4b0c-8779-ab5a36e853ee" containerName="barbican-api" Jan 27 14:05:41 crc kubenswrapper[4914]: E0127 14:05:41.144779 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec53709e-df2b-4fc9-b9ac-6e144a262455" containerName="proxy-httpd" Jan 27 
14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.144786 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec53709e-df2b-4fc9-b9ac-6e144a262455" containerName="proxy-httpd" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.144987 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec53709e-df2b-4fc9-b9ac-6e144a262455" containerName="proxy-httpd" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.145004 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="b004d6c6-2289-4b0c-8779-ab5a36e853ee" containerName="barbican-api-log" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.145016 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec53709e-df2b-4fc9-b9ac-6e144a262455" containerName="sg-core" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.145039 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="b004d6c6-2289-4b0c-8779-ab5a36e853ee" containerName="barbican-api" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.149021 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.158383 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.170490 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.182909 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.230348 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b004d6c6-2289-4b0c-8779-ab5a36e853ee-logs\") pod \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.230467 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-config-data\") pod \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.230547 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-combined-ca-bundle\") pod \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.230591 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-config-data-custom\") pod \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.230616 4914 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnq6m\" (UniqueName: \"kubernetes.io/projected/b004d6c6-2289-4b0c-8779-ab5a36e853ee-kube-api-access-nnq6m\") pod \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\" (UID: \"b004d6c6-2289-4b0c-8779-ab5a36e853ee\") " Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.244299 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b004d6c6-2289-4b0c-8779-ab5a36e853ee-logs" (OuterVolumeSpecName: "logs") pod "b004d6c6-2289-4b0c-8779-ab5a36e853ee" (UID: "b004d6c6-2289-4b0c-8779-ab5a36e853ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.322871 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b004d6c6-2289-4b0c-8779-ab5a36e853ee-kube-api-access-nnq6m" (OuterVolumeSpecName: "kube-api-access-nnq6m") pod "b004d6c6-2289-4b0c-8779-ab5a36e853ee" (UID: "b004d6c6-2289-4b0c-8779-ab5a36e853ee"). InnerVolumeSpecName "kube-api-access-nnq6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.322929 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b004d6c6-2289-4b0c-8779-ab5a36e853ee" (UID: "b004d6c6-2289-4b0c-8779-ab5a36e853ee"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.332784 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.332887 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-scripts\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.332908 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.332967 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-run-httpd\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.332995 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-log-httpd\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.333015 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fhp6\" (UniqueName: \"kubernetes.io/projected/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-kube-api-access-4fhp6\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.333046 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-config-data\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.333098 4914 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.333124 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnq6m\" (UniqueName: \"kubernetes.io/projected/b004d6c6-2289-4b0c-8779-ab5a36e853ee-kube-api-access-nnq6m\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.333133 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b004d6c6-2289-4b0c-8779-ab5a36e853ee-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.350647 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b004d6c6-2289-4b0c-8779-ab5a36e853ee" (UID: "b004d6c6-2289-4b0c-8779-ab5a36e853ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.381857 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-config-data" (OuterVolumeSpecName: "config-data") pod "b004d6c6-2289-4b0c-8779-ab5a36e853ee" (UID: "b004d6c6-2289-4b0c-8779-ab5a36e853ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.434975 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-run-httpd\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.435066 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-log-httpd\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.435107 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fhp6\" (UniqueName: \"kubernetes.io/projected/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-kube-api-access-4fhp6\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.435186 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-config-data\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.435263 4914 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.435316 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-scripts\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.435337 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.435459 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.435476 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b004d6c6-2289-4b0c-8779-ab5a36e853ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.436224 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-run-httpd\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.436740 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-log-httpd\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.444291 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.444999 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-config-data\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.449476 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-scripts\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.454729 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.465308 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fhp6\" (UniqueName: \"kubernetes.io/projected/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-kube-api-access-4fhp6\") pod \"ceilometer-0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.526992 4914 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.793653 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.831121 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:05:41 crc kubenswrapper[4914]: W0127 14:05:41.850150 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbc3e2d6_99a8_4002_ae3e_f7fb4edc06f0.slice/crio-5da504e0d68ca989f1ced31cbd88ddd877b0ba0cd68eba696730815de8f3d411 WatchSource:0}: Error finding container 5da504e0d68ca989f1ced31cbd88ddd877b0ba0cd68eba696730815de8f3d411: Status 404 returned error can't find the container with id 5da504e0d68ca989f1ced31cbd88ddd877b0ba0cd68eba696730815de8f3d411 Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.976102 4914 generic.go:334] "Generic (PLEG): container finished" podID="6cb7fe78-4222-4e97-9b06-62e24491812a" containerID="3e8f3e5eb47a182dceeb12a2ac6194eef2238e658ebcbe8dae48ceb4d691ed56" exitCode=0 Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.976330 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" event={"ID":"6cb7fe78-4222-4e97-9b06-62e24491812a","Type":"ContainerDied","Data":"3e8f3e5eb47a182dceeb12a2ac6194eef2238e658ebcbe8dae48ceb4d691ed56"} Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.984665 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78d766f864-q9f4n" event={"ID":"b004d6c6-2289-4b0c-8779-ab5a36e853ee","Type":"ContainerDied","Data":"e4c6eec45ff742f0564138c8cc43bdf404442dde9becc4b8c7c4d90f74b00ee6"} Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.984717 4914 scope.go:117] "RemoveContainer" containerID="950934a28b67a4a5b0a56b08f76e6500884ccce199b3e557cd45b2444f12e931" 
Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.984680 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78d766f864-q9f4n" Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.988197 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72439b1e-f3fd-438a-be94-a5663bcefb49","Type":"ContainerStarted","Data":"3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96"} Jan 27 14:05:41 crc kubenswrapper[4914]: I0127 14:05:41.990883 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0","Type":"ContainerStarted","Data":"5da504e0d68ca989f1ced31cbd88ddd877b0ba0cd68eba696730815de8f3d411"} Jan 27 14:05:42 crc kubenswrapper[4914]: I0127 14:05:42.029930 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78d766f864-q9f4n"] Jan 27 14:05:42 crc kubenswrapper[4914]: I0127 14:05:42.035189 4914 scope.go:117] "RemoveContainer" containerID="fc8a11d54a876b3fb4ee305d207fd66e4bd1d465f7d67c2dfc4b843b52319df4" Jan 27 14:05:42 crc kubenswrapper[4914]: I0127 14:05:42.041596 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-78d766f864-q9f4n"] Jan 27 14:05:42 crc kubenswrapper[4914]: I0127 14:05:42.318340 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b004d6c6-2289-4b0c-8779-ab5a36e853ee" path="/var/lib/kubelet/pods/b004d6c6-2289-4b0c-8779-ab5a36e853ee/volumes" Jan 27 14:05:42 crc kubenswrapper[4914]: I0127 14:05:42.320562 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec53709e-df2b-4fc9-b9ac-6e144a262455" path="/var/lib/kubelet/pods/ec53709e-df2b-4fc9-b9ac-6e144a262455/volumes" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.008312 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" 
event={"ID":"6cb7fe78-4222-4e97-9b06-62e24491812a","Type":"ContainerStarted","Data":"cd66fdab56486629a9804f6a1bf54a8d1f0c8379c0878301a9eb10b735a5eadf"} Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.008647 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.010577 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72439b1e-f3fd-438a-be94-a5663bcefb49","Type":"ContainerStarted","Data":"3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668"} Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.010746 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="72439b1e-f3fd-438a-be94-a5663bcefb49" containerName="cinder-api-log" containerID="cri-o://3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96" gracePeriod=30 Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.011063 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.011106 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="72439b1e-f3fd-438a-be94-a5663bcefb49" containerName="cinder-api" containerID="cri-o://3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668" gracePeriod=30 Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.032190 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" podStartSLOduration=4.032165902 podStartE2EDuration="4.032165902s" podCreationTimestamp="2026-01-27 14:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:43.030551309 +0000 UTC m=+1301.342901404" watchObservedRunningTime="2026-01-27 
14:05:43.032165902 +0000 UTC m=+1301.344516007" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.117224 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.151183 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.151159294 podStartE2EDuration="4.151159294s" podCreationTimestamp="2026-01-27 14:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:43.052186541 +0000 UTC m=+1301.364536656" watchObservedRunningTime="2026-01-27 14:05:43.151159294 +0000 UTC m=+1301.463509379" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.269016 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.667074 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.791215 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k7mc\" (UniqueName: \"kubernetes.io/projected/72439b1e-f3fd-438a-be94-a5663bcefb49-kube-api-access-7k7mc\") pod \"72439b1e-f3fd-438a-be94-a5663bcefb49\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.791618 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-scripts\") pod \"72439b1e-f3fd-438a-be94-a5663bcefb49\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.791679 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-config-data\") pod \"72439b1e-f3fd-438a-be94-a5663bcefb49\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.791726 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-combined-ca-bundle\") pod \"72439b1e-f3fd-438a-be94-a5663bcefb49\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.791809 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72439b1e-f3fd-438a-be94-a5663bcefb49-etc-machine-id\") pod \"72439b1e-f3fd-438a-be94-a5663bcefb49\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.791981 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/72439b1e-f3fd-438a-be94-a5663bcefb49-logs\") pod \"72439b1e-f3fd-438a-be94-a5663bcefb49\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.792096 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-config-data-custom\") pod \"72439b1e-f3fd-438a-be94-a5663bcefb49\" (UID: \"72439b1e-f3fd-438a-be94-a5663bcefb49\") " Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.792104 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72439b1e-f3fd-438a-be94-a5663bcefb49-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "72439b1e-f3fd-438a-be94-a5663bcefb49" (UID: "72439b1e-f3fd-438a-be94-a5663bcefb49"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.792473 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72439b1e-f3fd-438a-be94-a5663bcefb49-logs" (OuterVolumeSpecName: "logs") pod "72439b1e-f3fd-438a-be94-a5663bcefb49" (UID: "72439b1e-f3fd-438a-be94-a5663bcefb49"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.792880 4914 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72439b1e-f3fd-438a-be94-a5663bcefb49-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.792908 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72439b1e-f3fd-438a-be94-a5663bcefb49-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.797860 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "72439b1e-f3fd-438a-be94-a5663bcefb49" (UID: "72439b1e-f3fd-438a-be94-a5663bcefb49"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.797982 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-scripts" (OuterVolumeSpecName: "scripts") pod "72439b1e-f3fd-438a-be94-a5663bcefb49" (UID: "72439b1e-f3fd-438a-be94-a5663bcefb49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.798795 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72439b1e-f3fd-438a-be94-a5663bcefb49-kube-api-access-7k7mc" (OuterVolumeSpecName: "kube-api-access-7k7mc") pod "72439b1e-f3fd-438a-be94-a5663bcefb49" (UID: "72439b1e-f3fd-438a-be94-a5663bcefb49"). InnerVolumeSpecName "kube-api-access-7k7mc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.829088 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72439b1e-f3fd-438a-be94-a5663bcefb49" (UID: "72439b1e-f3fd-438a-be94-a5663bcefb49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.867566 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-config-data" (OuterVolumeSpecName: "config-data") pod "72439b1e-f3fd-438a-be94-a5663bcefb49" (UID: "72439b1e-f3fd-438a-be94-a5663bcefb49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.894286 4914 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.894537 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k7mc\" (UniqueName: \"kubernetes.io/projected/72439b1e-f3fd-438a-be94-a5663bcefb49-kube-api-access-7k7mc\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.894617 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.894682 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-config-data\") on node \"crc\" DevicePath \"\"" Jan 
27 14:05:43 crc kubenswrapper[4914]: I0127 14:05:43.894743 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72439b1e-f3fd-438a-be94-a5663bcefb49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.021344 4914 generic.go:334] "Generic (PLEG): container finished" podID="72439b1e-f3fd-438a-be94-a5663bcefb49" containerID="3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668" exitCode=0 Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.021383 4914 generic.go:334] "Generic (PLEG): container finished" podID="72439b1e-f3fd-438a-be94-a5663bcefb49" containerID="3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96" exitCode=143 Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.021436 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72439b1e-f3fd-438a-be94-a5663bcefb49","Type":"ContainerDied","Data":"3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668"} Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.021470 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72439b1e-f3fd-438a-be94-a5663bcefb49","Type":"ContainerDied","Data":"3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96"} Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.021483 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72439b1e-f3fd-438a-be94-a5663bcefb49","Type":"ContainerDied","Data":"44396b80569aa62371aed6895d9bcf42382ddfe140d651bb6e8013c0e2de32c8"} Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.021515 4914 scope.go:117] "RemoveContainer" containerID="3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.021661 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.028912 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0","Type":"ContainerStarted","Data":"d529544a108a14274660565d4308a8cdf8c8eb0f9c3ba9367d41eb505adb93f7"} Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.056152 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.056497 4914 scope.go:117] "RemoveContainer" containerID="3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.064949 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.091786 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:05:44 crc kubenswrapper[4914]: E0127 14:05:44.092155 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72439b1e-f3fd-438a-be94-a5663bcefb49" containerName="cinder-api" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.092173 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="72439b1e-f3fd-438a-be94-a5663bcefb49" containerName="cinder-api" Jan 27 14:05:44 crc kubenswrapper[4914]: E0127 14:05:44.092185 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72439b1e-f3fd-438a-be94-a5663bcefb49" containerName="cinder-api-log" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.092191 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="72439b1e-f3fd-438a-be94-a5663bcefb49" containerName="cinder-api-log" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.092373 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="72439b1e-f3fd-438a-be94-a5663bcefb49" containerName="cinder-api" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 
14:05:44.092387 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="72439b1e-f3fd-438a-be94-a5663bcefb49" containerName="cinder-api-log" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.093511 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.100758 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.101177 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.101412 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.104077 4914 scope.go:117] "RemoveContainer" containerID="3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.105668 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:05:44 crc kubenswrapper[4914]: E0127 14:05:44.106783 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668\": container with ID starting with 3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668 not found: ID does not exist" containerID="3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.106848 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668"} err="failed to get container status \"3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668\": rpc error: code = NotFound desc = could not 
find container \"3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668\": container with ID starting with 3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668 not found: ID does not exist" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.106877 4914 scope.go:117] "RemoveContainer" containerID="3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96" Jan 27 14:05:44 crc kubenswrapper[4914]: E0127 14:05:44.107283 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96\": container with ID starting with 3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96 not found: ID does not exist" containerID="3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.107311 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96"} err="failed to get container status \"3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96\": rpc error: code = NotFound desc = could not find container \"3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96\": container with ID starting with 3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96 not found: ID does not exist" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.107326 4914 scope.go:117] "RemoveContainer" containerID="3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.110962 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668"} err="failed to get container status \"3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668\": rpc error: code = NotFound desc = 
could not find container \"3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668\": container with ID starting with 3daa400791aa8ec065b8beb7721573f6ca92ea80b2aa52c7355f10add9bd4668 not found: ID does not exist" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.111002 4914 scope.go:117] "RemoveContainer" containerID="3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.111456 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96"} err="failed to get container status \"3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96\": rpc error: code = NotFound desc = could not find container \"3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96\": container with ID starting with 3d81e04bf162d8f3844ca1f4b35715c5e2b5079828b0207fd5aac76e9d877e96 not found: ID does not exist" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.205062 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.205150 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4dd99d95-c640-4bb1-ab91-b5415689764b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.205169 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-scripts\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.205209 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbc7f\" (UniqueName: \"kubernetes.io/projected/4dd99d95-c640-4bb1-ab91-b5415689764b-kube-api-access-bbc7f\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.205225 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-config-data-custom\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.205250 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-config-data\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.205373 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.205532 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dd99d95-c640-4bb1-ab91-b5415689764b-logs\") pod \"cinder-api-0\" (UID: 
\"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.205663 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.303535 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72439b1e-f3fd-438a-be94-a5663bcefb49" path="/var/lib/kubelet/pods/72439b1e-f3fd-438a-be94-a5663bcefb49/volumes" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.306803 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4dd99d95-c640-4bb1-ab91-b5415689764b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.307077 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4dd99d95-c640-4bb1-ab91-b5415689764b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.307144 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-scripts\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.307301 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-config-data-custom\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.307372 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbc7f\" (UniqueName: \"kubernetes.io/projected/4dd99d95-c640-4bb1-ab91-b5415689764b-kube-api-access-bbc7f\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.307468 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-config-data\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.307623 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.307790 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dd99d95-c640-4bb1-ab91-b5415689764b-logs\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.308064 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: 
I0127 14:05:44.308209 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.311225 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dd99d95-c640-4bb1-ab91-b5415689764b-logs\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.311735 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-scripts\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.312232 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.312751 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-config-data\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.313669 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " 
pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.315783 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.312464 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-config-data-custom\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.327493 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbc7f\" (UniqueName: \"kubernetes.io/projected/4dd99d95-c640-4bb1-ab91-b5415689764b-kube-api-access-bbc7f\") pod \"cinder-api-0\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.425219 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:05:44 crc kubenswrapper[4914]: I0127 14:05:44.941184 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:05:45 crc kubenswrapper[4914]: I0127 14:05:45.040274 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6bb6c77c5d-pwr6c" Jan 27 14:05:45 crc kubenswrapper[4914]: I0127 14:05:45.059342 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4dd99d95-c640-4bb1-ab91-b5415689764b","Type":"ContainerStarted","Data":"1efe3053add5d1e512258e04b402fa99c0555582724a44e406c30e5aa141ebf5"} Jan 27 14:05:45 crc kubenswrapper[4914]: I0127 14:05:45.077305 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0","Type":"ContainerStarted","Data":"77c0559587084688e5b8def1b291cbbae0217cd025267c81aeb6e4eef954dd62"} Jan 27 14:05:45 crc kubenswrapper[4914]: I0127 14:05:45.154239 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75b4645c86-9r9q2"] Jan 27 14:05:45 crc kubenswrapper[4914]: I0127 14:05:45.154462 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75b4645c86-9r9q2" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon-log" containerID="cri-o://ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113" gracePeriod=30 Jan 27 14:05:45 crc kubenswrapper[4914]: I0127 14:05:45.154567 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75b4645c86-9r9q2" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon" containerID="cri-o://9b6322b0157303af4f92615415e427acec270b33985cd6111ea50a4b30ddc85c" gracePeriod=30 Jan 27 14:05:45 crc kubenswrapper[4914]: I0127 14:05:45.175690 4914 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/horizon-75b4645c86-9r9q2" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 27 14:05:46 crc kubenswrapper[4914]: I0127 14:05:46.088768 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0","Type":"ContainerStarted","Data":"0b4fe6a455ec73498c3e4b0387aed9a6ec57a320dfeda2c8d6cc3ee0b5b12782"} Jan 27 14:05:46 crc kubenswrapper[4914]: I0127 14:05:46.091576 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4dd99d95-c640-4bb1-ab91-b5415689764b","Type":"ContainerStarted","Data":"bb8b8da350600f27f8928fd62aeb511f7c81a4b9c3e9b73389552c0153506c5e"} Jan 27 14:05:47 crc kubenswrapper[4914]: I0127 14:05:47.104770 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4dd99d95-c640-4bb1-ab91-b5415689764b","Type":"ContainerStarted","Data":"9d16b8915a376fc77b276a361cb7b2b796786b283022e1ca9d05600f37764706"} Jan 27 14:05:47 crc kubenswrapper[4914]: I0127 14:05:47.105368 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 14:05:47 crc kubenswrapper[4914]: I0127 14:05:47.110741 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab2a075c-06a5-4864-95e9-d973906bc0c6","Type":"ContainerStarted","Data":"c386a6d4dfb728f38ecd7278714db242195140c16a55943c192511d047516a19"} Jan 27 14:05:47 crc kubenswrapper[4914]: I0127 14:05:47.110777 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab2a075c-06a5-4864-95e9-d973906bc0c6","Type":"ContainerStarted","Data":"ee07a18e3deca062504a1503ca99b51c73bd894c734d2f385b85cd098eb61cd0"} Jan 27 14:05:47 crc kubenswrapper[4914]: I0127 14:05:47.138345 4914 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.138302554 podStartE2EDuration="3.138302554s" podCreationTimestamp="2026-01-27 14:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:05:47.130634344 +0000 UTC m=+1305.442984449" watchObservedRunningTime="2026-01-27 14:05:47.138302554 +0000 UTC m=+1305.450652639" Jan 27 14:05:48 crc kubenswrapper[4914]: I0127 14:05:48.557857 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75b4645c86-9r9q2" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:41624->10.217.0.150:8443: read: connection reset by peer" Jan 27 14:05:49 crc kubenswrapper[4914]: I0127 14:05:49.131617 4914 generic.go:334] "Generic (PLEG): container finished" podID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerID="9b6322b0157303af4f92615415e427acec270b33985cd6111ea50a4b30ddc85c" exitCode=0 Jan 27 14:05:49 crc kubenswrapper[4914]: I0127 14:05:49.131752 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75b4645c86-9r9q2" event={"ID":"1dd59938-4cf8-4632-8b1c-237cf981fd5f","Type":"ContainerDied","Data":"9b6322b0157303af4f92615415e427acec270b33985cd6111ea50a4b30ddc85c"} Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.033536 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.116936 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.154970 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.10171278 
podStartE2EDuration="11.154949388s" podCreationTimestamp="2026-01-27 14:05:39 +0000 UTC" firstStartedPulling="2026-01-27 14:05:40.516885249 +0000 UTC m=+1298.829235334" lastFinishedPulling="2026-01-27 14:05:45.570121867 +0000 UTC m=+1303.882471942" observedRunningTime="2026-01-27 14:05:47.167750651 +0000 UTC m=+1305.480100756" watchObservedRunningTime="2026-01-27 14:05:50.154949388 +0000 UTC m=+1308.467299473" Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.211960 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hqbkf"] Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.212455 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf" podUID="7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" containerName="dnsmasq-dns" containerID="cri-o://fc9acc3b25f034775a4b813744ebdbe02604782a2b7f319204a657660dfede22" gracePeriod=10 Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.818449 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf" Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.849480 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75b4645c86-9r9q2" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.943099 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-dns-swift-storage-0\") pod \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.943145 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-config\") pod \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.943192 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfrgk\" (UniqueName: \"kubernetes.io/projected/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-kube-api-access-nfrgk\") pod \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.943212 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-ovsdbserver-nb\") pod \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.943313 4914 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-dns-svc\") pod \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.943368 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-ovsdbserver-sb\") pod \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\" (UID: \"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db\") " Jan 27 14:05:50 crc kubenswrapper[4914]: I0127 14:05:50.947613 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-kube-api-access-nfrgk" (OuterVolumeSpecName: "kube-api-access-nfrgk") pod "7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" (UID: "7d2aabe6-c9f1-4002-a2dd-468a56b1b6db"). InnerVolumeSpecName "kube-api-access-nfrgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.045745 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfrgk\" (UniqueName: \"kubernetes.io/projected/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-kube-api-access-nfrgk\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.092249 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" (UID: "7d2aabe6-c9f1-4002-a2dd-468a56b1b6db"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.109537 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" (UID: "7d2aabe6-c9f1-4002-a2dd-468a56b1b6db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.137728 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-config" (OuterVolumeSpecName: "config") pod "7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" (UID: "7d2aabe6-c9f1-4002-a2dd-468a56b1b6db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.147727 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.147763 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.147777 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.149562 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" (UID: 
"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.170868 4914 generic.go:334] "Generic (PLEG): container finished" podID="7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" containerID="fc9acc3b25f034775a4b813744ebdbe02604782a2b7f319204a657660dfede22" exitCode=0 Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.171030 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf" event={"ID":"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db","Type":"ContainerDied","Data":"fc9acc3b25f034775a4b813744ebdbe02604782a2b7f319204a657660dfede22"} Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.171064 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf" event={"ID":"7d2aabe6-c9f1-4002-a2dd-468a56b1b6db","Type":"ContainerDied","Data":"ad4ca58a2b0a464e55c6172fa5cdcdddae12b4821fb0c8870c974182dbb78c71"} Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.171095 4914 scope.go:117] "RemoveContainer" containerID="fc9acc3b25f034775a4b813744ebdbe02604782a2b7f319204a657660dfede22" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.172130 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hqbkf" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.172522 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" (UID: "7d2aabe6-c9f1-4002-a2dd-468a56b1b6db"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.180997 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0","Type":"ContainerStarted","Data":"580dd87b74cd74981095091785ee9e27784b7f0a0be14e913d939f32bd7c790d"} Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.181307 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.194943 4914 scope.go:117] "RemoveContainer" containerID="4889031c0a669378361f7087856dba6a105677abc62a91d5daf27538b0f9d973" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.208860 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.323595034 podStartE2EDuration="10.20882515s" podCreationTimestamp="2026-01-27 14:05:41 +0000 UTC" firstStartedPulling="2026-01-27 14:05:41.852721349 +0000 UTC m=+1300.165071434" lastFinishedPulling="2026-01-27 14:05:50.737951465 +0000 UTC m=+1309.050301550" observedRunningTime="2026-01-27 14:05:51.199220677 +0000 UTC m=+1309.511570762" watchObservedRunningTime="2026-01-27 14:05:51.20882515 +0000 UTC m=+1309.521175235" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.221607 4914 scope.go:117] "RemoveContainer" containerID="fc9acc3b25f034775a4b813744ebdbe02604782a2b7f319204a657660dfede22" Jan 27 14:05:51 crc kubenswrapper[4914]: E0127 14:05:51.222065 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc9acc3b25f034775a4b813744ebdbe02604782a2b7f319204a657660dfede22\": container with ID starting with fc9acc3b25f034775a4b813744ebdbe02604782a2b7f319204a657660dfede22 not found: ID does not exist" containerID="fc9acc3b25f034775a4b813744ebdbe02604782a2b7f319204a657660dfede22" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 
14:05:51.222101 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc9acc3b25f034775a4b813744ebdbe02604782a2b7f319204a657660dfede22"} err="failed to get container status \"fc9acc3b25f034775a4b813744ebdbe02604782a2b7f319204a657660dfede22\": rpc error: code = NotFound desc = could not find container \"fc9acc3b25f034775a4b813744ebdbe02604782a2b7f319204a657660dfede22\": container with ID starting with fc9acc3b25f034775a4b813744ebdbe02604782a2b7f319204a657660dfede22 not found: ID does not exist" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.222128 4914 scope.go:117] "RemoveContainer" containerID="4889031c0a669378361f7087856dba6a105677abc62a91d5daf27538b0f9d973" Jan 27 14:05:51 crc kubenswrapper[4914]: E0127 14:05:51.222396 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4889031c0a669378361f7087856dba6a105677abc62a91d5daf27538b0f9d973\": container with ID starting with 4889031c0a669378361f7087856dba6a105677abc62a91d5daf27538b0f9d973 not found: ID does not exist" containerID="4889031c0a669378361f7087856dba6a105677abc62a91d5daf27538b0f9d973" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.222410 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4889031c0a669378361f7087856dba6a105677abc62a91d5daf27538b0f9d973"} err="failed to get container status \"4889031c0a669378361f7087856dba6a105677abc62a91d5daf27538b0f9d973\": rpc error: code = NotFound desc = could not find container \"4889031c0a669378361f7087856dba6a105677abc62a91d5daf27538b0f9d973\": container with ID starting with 4889031c0a669378361f7087856dba6a105677abc62a91d5daf27538b0f9d973 not found: ID does not exist" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.251004 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.251431 4914 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.516092 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hqbkf"] Jan 27 14:05:51 crc kubenswrapper[4914]: I0127 14:05:51.527556 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hqbkf"] Jan 27 14:05:52 crc kubenswrapper[4914]: I0127 14:05:52.305526 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" path="/var/lib/kubelet/pods/7d2aabe6-c9f1-4002-a2dd-468a56b1b6db/volumes" Jan 27 14:05:52 crc kubenswrapper[4914]: I0127 14:05:52.564177 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66649fdc7d-bbgtq" Jan 27 14:05:52 crc kubenswrapper[4914]: I0127 14:05:52.717647 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66649fdc7d-bbgtq" Jan 27 14:05:53 crc kubenswrapper[4914]: I0127 14:05:53.728180 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-794d7bcbcd-drzqz" Jan 27 14:05:55 crc kubenswrapper[4914]: I0127 14:05:55.285995 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 14:05:55 crc kubenswrapper[4914]: I0127 14:05:55.350509 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:05:56 crc kubenswrapper[4914]: I0127 14:05:56.228138 4914 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-scheduler-0" podUID="ab2a075c-06a5-4864-95e9-d973906bc0c6" containerName="cinder-scheduler" containerID="cri-o://ee07a18e3deca062504a1503ca99b51c73bd894c734d2f385b85cd098eb61cd0" gracePeriod=30 Jan 27 14:05:56 crc kubenswrapper[4914]: I0127 14:05:56.228288 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ab2a075c-06a5-4864-95e9-d973906bc0c6" containerName="probe" containerID="cri-o://c386a6d4dfb728f38ecd7278714db242195140c16a55943c192511d047516a19" gracePeriod=30 Jan 27 14:05:56 crc kubenswrapper[4914]: I0127 14:05:56.552596 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.151065 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 14:05:57 crc kubenswrapper[4914]: E0127 14:05:57.151884 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" containerName="init" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.151910 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" containerName="init" Jan 27 14:05:57 crc kubenswrapper[4914]: E0127 14:05:57.151930 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" containerName="dnsmasq-dns" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.151940 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" containerName="dnsmasq-dns" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.152158 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d2aabe6-c9f1-4002-a2dd-468a56b1b6db" containerName="dnsmasq-dns" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.153224 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.155732 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-2dcmf" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.155939 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.157343 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.170568 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.220195 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8755b5764-8ts5v" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.254929 4914 generic.go:334] "Generic (PLEG): container finished" podID="ab2a075c-06a5-4864-95e9-d973906bc0c6" containerID="c386a6d4dfb728f38ecd7278714db242195140c16a55943c192511d047516a19" exitCode=0 Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.254981 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab2a075c-06a5-4864-95e9-d973906bc0c6","Type":"ContainerDied","Data":"c386a6d4dfb728f38ecd7278714db242195140c16a55943c192511d047516a19"} Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.268325 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ebffe9-3030-466c-adbf-83deadb5d5d0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c3ebffe9-3030-466c-adbf-83deadb5d5d0\") " pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.268381 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3ebffe9-3030-466c-adbf-83deadb5d5d0-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3ebffe9-3030-466c-adbf-83deadb5d5d0\") " pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.268420 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz4n9\" (UniqueName: \"kubernetes.io/projected/c3ebffe9-3030-466c-adbf-83deadb5d5d0-kube-api-access-vz4n9\") pod \"openstackclient\" (UID: \"c3ebffe9-3030-466c-adbf-83deadb5d5d0\") " pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.268725 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3ebffe9-3030-466c-adbf-83deadb5d5d0-openstack-config\") pod \"openstackclient\" (UID: \"c3ebffe9-3030-466c-adbf-83deadb5d5d0\") " pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.370648 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ebffe9-3030-466c-adbf-83deadb5d5d0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c3ebffe9-3030-466c-adbf-83deadb5d5d0\") " pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.370705 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3ebffe9-3030-466c-adbf-83deadb5d5d0-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3ebffe9-3030-466c-adbf-83deadb5d5d0\") " pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.370739 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz4n9\" (UniqueName: 
\"kubernetes.io/projected/c3ebffe9-3030-466c-adbf-83deadb5d5d0-kube-api-access-vz4n9\") pod \"openstackclient\" (UID: \"c3ebffe9-3030-466c-adbf-83deadb5d5d0\") " pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.370975 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3ebffe9-3030-466c-adbf-83deadb5d5d0-openstack-config\") pod \"openstackclient\" (UID: \"c3ebffe9-3030-466c-adbf-83deadb5d5d0\") " pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.372247 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3ebffe9-3030-466c-adbf-83deadb5d5d0-openstack-config\") pod \"openstackclient\" (UID: \"c3ebffe9-3030-466c-adbf-83deadb5d5d0\") " pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.384784 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ebffe9-3030-466c-adbf-83deadb5d5d0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c3ebffe9-3030-466c-adbf-83deadb5d5d0\") " pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.385258 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3ebffe9-3030-466c-adbf-83deadb5d5d0-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3ebffe9-3030-466c-adbf-83deadb5d5d0\") " pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.389785 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz4n9\" (UniqueName: \"kubernetes.io/projected/c3ebffe9-3030-466c-adbf-83deadb5d5d0-kube-api-access-vz4n9\") pod \"openstackclient\" (UID: \"c3ebffe9-3030-466c-adbf-83deadb5d5d0\") " 
pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.484630 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.878507 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.999251 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-combined-ca-bundle\") pod \"ab2a075c-06a5-4864-95e9-d973906bc0c6\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.999379 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-config-data-custom\") pod \"ab2a075c-06a5-4864-95e9-d973906bc0c6\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.999417 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab2a075c-06a5-4864-95e9-d973906bc0c6-etc-machine-id\") pod \"ab2a075c-06a5-4864-95e9-d973906bc0c6\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.999449 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-config-data\") pod \"ab2a075c-06a5-4864-95e9-d973906bc0c6\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.999473 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n89bn\" (UniqueName: 
\"kubernetes.io/projected/ab2a075c-06a5-4864-95e9-d973906bc0c6-kube-api-access-n89bn\") pod \"ab2a075c-06a5-4864-95e9-d973906bc0c6\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " Jan 27 14:05:57 crc kubenswrapper[4914]: I0127 14:05:57.999624 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-scripts\") pod \"ab2a075c-06a5-4864-95e9-d973906bc0c6\" (UID: \"ab2a075c-06a5-4864-95e9-d973906bc0c6\") " Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.000808 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2a075c-06a5-4864-95e9-d973906bc0c6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ab2a075c-06a5-4864-95e9-d973906bc0c6" (UID: "ab2a075c-06a5-4864-95e9-d973906bc0c6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.023668 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-scripts" (OuterVolumeSpecName: "scripts") pod "ab2a075c-06a5-4864-95e9-d973906bc0c6" (UID: "ab2a075c-06a5-4864-95e9-d973906bc0c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.023720 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2a075c-06a5-4864-95e9-d973906bc0c6-kube-api-access-n89bn" (OuterVolumeSpecName: "kube-api-access-n89bn") pod "ab2a075c-06a5-4864-95e9-d973906bc0c6" (UID: "ab2a075c-06a5-4864-95e9-d973906bc0c6"). InnerVolumeSpecName "kube-api-access-n89bn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.023730 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ab2a075c-06a5-4864-95e9-d973906bc0c6" (UID: "ab2a075c-06a5-4864-95e9-d973906bc0c6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:58 crc kubenswrapper[4914]: W0127 14:05:58.047569 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3ebffe9_3030_466c_adbf_83deadb5d5d0.slice/crio-e0c3dd05fb7cf68bd8e4076d5b2f3ddd1b659cb96b926914c9c0a0de534da3c3 WatchSource:0}: Error finding container e0c3dd05fb7cf68bd8e4076d5b2f3ddd1b659cb96b926914c9c0a0de534da3c3: Status 404 returned error can't find the container with id e0c3dd05fb7cf68bd8e4076d5b2f3ddd1b659cb96b926914c9c0a0de534da3c3 Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.048067 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.066231 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab2a075c-06a5-4864-95e9-d973906bc0c6" (UID: "ab2a075c-06a5-4864-95e9-d973906bc0c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.101943 4914 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.101976 4914 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab2a075c-06a5-4864-95e9-d973906bc0c6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.101986 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n89bn\" (UniqueName: \"kubernetes.io/projected/ab2a075c-06a5-4864-95e9-d973906bc0c6-kube-api-access-n89bn\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.101997 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.102006 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.129291 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-config-data" (OuterVolumeSpecName: "config-data") pod "ab2a075c-06a5-4864-95e9-d973906bc0c6" (UID: "ab2a075c-06a5-4864-95e9-d973906bc0c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.203421 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a075c-06a5-4864-95e9-d973906bc0c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.265029 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c3ebffe9-3030-466c-adbf-83deadb5d5d0","Type":"ContainerStarted","Data":"e0c3dd05fb7cf68bd8e4076d5b2f3ddd1b659cb96b926914c9c0a0de534da3c3"} Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.269341 4914 generic.go:334] "Generic (PLEG): container finished" podID="ab2a075c-06a5-4864-95e9-d973906bc0c6" containerID="ee07a18e3deca062504a1503ca99b51c73bd894c734d2f385b85cd098eb61cd0" exitCode=0 Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.269380 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab2a075c-06a5-4864-95e9-d973906bc0c6","Type":"ContainerDied","Data":"ee07a18e3deca062504a1503ca99b51c73bd894c734d2f385b85cd098eb61cd0"} Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.269419 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab2a075c-06a5-4864-95e9-d973906bc0c6","Type":"ContainerDied","Data":"223141cf4491e1a19c66fda459e08487f138d0c24873701a3e6572bfbf1782d0"} Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.269430 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.269439 4914 scope.go:117] "RemoveContainer" containerID="c386a6d4dfb728f38ecd7278714db242195140c16a55943c192511d047516a19" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.313975 4914 scope.go:117] "RemoveContainer" containerID="ee07a18e3deca062504a1503ca99b51c73bd894c734d2f385b85cd098eb61cd0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.316948 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.320276 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.336667 4914 scope.go:117] "RemoveContainer" containerID="c386a6d4dfb728f38ecd7278714db242195140c16a55943c192511d047516a19" Jan 27 14:05:58 crc kubenswrapper[4914]: E0127 14:05:58.337427 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c386a6d4dfb728f38ecd7278714db242195140c16a55943c192511d047516a19\": container with ID starting with c386a6d4dfb728f38ecd7278714db242195140c16a55943c192511d047516a19 not found: ID does not exist" containerID="c386a6d4dfb728f38ecd7278714db242195140c16a55943c192511d047516a19" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.337486 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c386a6d4dfb728f38ecd7278714db242195140c16a55943c192511d047516a19"} err="failed to get container status \"c386a6d4dfb728f38ecd7278714db242195140c16a55943c192511d047516a19\": rpc error: code = NotFound desc = could not find container \"c386a6d4dfb728f38ecd7278714db242195140c16a55943c192511d047516a19\": container with ID starting with c386a6d4dfb728f38ecd7278714db242195140c16a55943c192511d047516a19 not found: ID does not exist" Jan 27 14:05:58 crc 
kubenswrapper[4914]: I0127 14:05:58.337508 4914 scope.go:117] "RemoveContainer" containerID="ee07a18e3deca062504a1503ca99b51c73bd894c734d2f385b85cd098eb61cd0" Jan 27 14:05:58 crc kubenswrapper[4914]: E0127 14:05:58.337740 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee07a18e3deca062504a1503ca99b51c73bd894c734d2f385b85cd098eb61cd0\": container with ID starting with ee07a18e3deca062504a1503ca99b51c73bd894c734d2f385b85cd098eb61cd0 not found: ID does not exist" containerID="ee07a18e3deca062504a1503ca99b51c73bd894c734d2f385b85cd098eb61cd0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.337767 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee07a18e3deca062504a1503ca99b51c73bd894c734d2f385b85cd098eb61cd0"} err="failed to get container status \"ee07a18e3deca062504a1503ca99b51c73bd894c734d2f385b85cd098eb61cd0\": rpc error: code = NotFound desc = could not find container \"ee07a18e3deca062504a1503ca99b51c73bd894c734d2f385b85cd098eb61cd0\": container with ID starting with ee07a18e3deca062504a1503ca99b51c73bd894c734d2f385b85cd098eb61cd0 not found: ID does not exist" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.345817 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:05:58 crc kubenswrapper[4914]: E0127 14:05:58.346442 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2a075c-06a5-4864-95e9-d973906bc0c6" containerName="cinder-scheduler" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.346476 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2a075c-06a5-4864-95e9-d973906bc0c6" containerName="cinder-scheduler" Jan 27 14:05:58 crc kubenswrapper[4914]: E0127 14:05:58.346515 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2a075c-06a5-4864-95e9-d973906bc0c6" containerName="probe" Jan 27 14:05:58 crc 
kubenswrapper[4914]: I0127 14:05:58.346528 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2a075c-06a5-4864-95e9-d973906bc0c6" containerName="probe" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.346823 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2a075c-06a5-4864-95e9-d973906bc0c6" containerName="probe" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.346905 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2a075c-06a5-4864-95e9-d973906bc0c6" containerName="cinder-scheduler" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.348446 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.351557 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.386933 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.406952 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.407315 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e86ce2c8-20a7-4166-82ba-334e8463907b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.407433 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-config-data\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.407572 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntrbb\" (UniqueName: \"kubernetes.io/projected/e86ce2c8-20a7-4166-82ba-334e8463907b-kube-api-access-ntrbb\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.407656 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-scripts\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.407762 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.509880 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e86ce2c8-20a7-4166-82ba-334e8463907b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.510246 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-config-data\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.509994 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e86ce2c8-20a7-4166-82ba-334e8463907b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.510334 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntrbb\" (UniqueName: \"kubernetes.io/projected/e86ce2c8-20a7-4166-82ba-334e8463907b-kube-api-access-ntrbb\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.510489 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-scripts\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.510546 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.510600 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " 
pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.516387 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-scripts\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.517367 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.517468 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.519090 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-config-data\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.527957 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntrbb\" (UniqueName: \"kubernetes.io/projected/e86ce2c8-20a7-4166-82ba-334e8463907b-kube-api-access-ntrbb\") pod \"cinder-scheduler-0\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " pod="openstack/cinder-scheduler-0" Jan 27 14:05:58 crc kubenswrapper[4914]: I0127 14:05:58.676334 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:05:59 crc kubenswrapper[4914]: I0127 14:05:59.141754 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:05:59 crc kubenswrapper[4914]: I0127 14:05:59.286862 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e86ce2c8-20a7-4166-82ba-334e8463907b","Type":"ContainerStarted","Data":"bfc182738a6855e1a3b4144352a83288e1ec6933d0b63463ab20a0a3921e5527"} Jan 27 14:06:00 crc kubenswrapper[4914]: I0127 14:06:00.054917 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64bb7f895-7ftxk" Jan 27 14:06:00 crc kubenswrapper[4914]: I0127 14:06:00.130215 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8755b5764-8ts5v"] Jan 27 14:06:00 crc kubenswrapper[4914]: I0127 14:06:00.130537 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8755b5764-8ts5v" podUID="9da1cf46-054d-434c-9b77-a82cfd6353f3" containerName="neutron-httpd" containerID="cri-o://2c79b93366fe15b0506636708d94a36f41b21e63a852e5e89f23185ea10d7f8d" gracePeriod=30 Jan 27 14:06:00 crc kubenswrapper[4914]: I0127 14:06:00.132515 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8755b5764-8ts5v" podUID="9da1cf46-054d-434c-9b77-a82cfd6353f3" containerName="neutron-api" containerID="cri-o://de8dfd4f95672cd21698c72e3cd09e576103393c5966fd311aa5740381d5c43f" gracePeriod=30 Jan 27 14:06:00 crc kubenswrapper[4914]: I0127 14:06:00.349682 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2a075c-06a5-4864-95e9-d973906bc0c6" path="/var/lib/kubelet/pods/ab2a075c-06a5-4864-95e9-d973906bc0c6/volumes" Jan 27 14:06:00 crc kubenswrapper[4914]: I0127 14:06:00.351689 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"e86ce2c8-20a7-4166-82ba-334e8463907b","Type":"ContainerStarted","Data":"95858ea340a1388ecfc38da9cdfd26f47293c7ef6623f13b5eaaa6ff84919e55"} Jan 27 14:06:00 crc kubenswrapper[4914]: I0127 14:06:00.849446 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75b4645c86-9r9q2" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 27 14:06:01 crc kubenswrapper[4914]: I0127 14:06:01.361246 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e86ce2c8-20a7-4166-82ba-334e8463907b","Type":"ContainerStarted","Data":"8819080a7f82c8524d4b0ea1edfc7d2d5f35b2b4e7832f2d91ce1733db12fbf4"} Jan 27 14:06:01 crc kubenswrapper[4914]: I0127 14:06:01.378844 4914 generic.go:334] "Generic (PLEG): container finished" podID="9da1cf46-054d-434c-9b77-a82cfd6353f3" containerID="2c79b93366fe15b0506636708d94a36f41b21e63a852e5e89f23185ea10d7f8d" exitCode=0 Jan 27 14:06:01 crc kubenswrapper[4914]: I0127 14:06:01.378931 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8755b5764-8ts5v" event={"ID":"9da1cf46-054d-434c-9b77-a82cfd6353f3","Type":"ContainerDied","Data":"2c79b93366fe15b0506636708d94a36f41b21e63a852e5e89f23185ea10d7f8d"} Jan 27 14:06:01 crc kubenswrapper[4914]: I0127 14:06:01.402233 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.402210458 podStartE2EDuration="3.402210458s" podCreationTimestamp="2026-01-27 14:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:01.390380993 +0000 UTC m=+1319.702731078" watchObservedRunningTime="2026-01-27 14:06:01.402210458 +0000 UTC m=+1319.714560543" Jan 27 14:06:02 crc 
kubenswrapper[4914]: I0127 14:06:02.427242 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:06:02 crc kubenswrapper[4914]: I0127 14:06:02.427802 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4dd99d95-c640-4bb1-ab91-b5415689764b" containerName="cinder-api-log" containerID="cri-o://bb8b8da350600f27f8928fd62aeb511f7c81a4b9c3e9b73389552c0153506c5e" gracePeriod=30 Jan 27 14:06:02 crc kubenswrapper[4914]: I0127 14:06:02.428290 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4dd99d95-c640-4bb1-ab91-b5415689764b" containerName="cinder-api" containerID="cri-o://9d16b8915a376fc77b276a361cb7b2b796786b283022e1ca9d05600f37764706" gracePeriod=30 Jan 27 14:06:02 crc kubenswrapper[4914]: I0127 14:06:02.435662 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="4dd99d95-c640-4bb1-ab91-b5415689764b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.171:8776/healthcheck\": read tcp 10.217.0.2:37212->10.217.0.171:8776: read: connection reset by peer" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.415521 4914 generic.go:334] "Generic (PLEG): container finished" podID="4dd99d95-c640-4bb1-ab91-b5415689764b" containerID="bb8b8da350600f27f8928fd62aeb511f7c81a4b9c3e9b73389552c0153506c5e" exitCode=143 Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.415885 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4dd99d95-c640-4bb1-ab91-b5415689764b","Type":"ContainerDied","Data":"bb8b8da350600f27f8928fd62aeb511f7c81a4b9c3e9b73389552c0153506c5e"} Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.677043 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.709750 4914 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6786c8f89-752mb"] Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.711293 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.713861 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.715017 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.715175 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.725934 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6786c8f89-752mb"] Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.854416 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh8dc\" (UniqueName: \"kubernetes.io/projected/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-kube-api-access-rh8dc\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.854770 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-public-tls-certs\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.854978 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-log-httpd\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.855123 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-etc-swift\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.855256 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-internal-tls-certs\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.855372 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-run-httpd\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.855483 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-config-data\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.855623 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-combined-ca-bundle\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.957724 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh8dc\" (UniqueName: \"kubernetes.io/projected/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-kube-api-access-rh8dc\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.957818 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-public-tls-certs\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.957866 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-log-httpd\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.957909 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-etc-swift\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.957941 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-internal-tls-certs\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.957966 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-run-httpd\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.957990 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-config-data\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.958038 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-combined-ca-bundle\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.961393 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-log-httpd\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.961711 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-run-httpd\") pod 
\"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.964227 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-public-tls-certs\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.965016 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-combined-ca-bundle\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.977282 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-internal-tls-certs\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.979102 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-config-data\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.981121 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-etc-swift\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " 
pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:03 crc kubenswrapper[4914]: I0127 14:06:03.981508 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh8dc\" (UniqueName: \"kubernetes.io/projected/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-kube-api-access-rh8dc\") pod \"swift-proxy-6786c8f89-752mb\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:04 crc kubenswrapper[4914]: I0127 14:06:04.032529 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:04 crc kubenswrapper[4914]: I0127 14:06:04.341919 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:04 crc kubenswrapper[4914]: I0127 14:06:04.342262 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="ceilometer-central-agent" containerID="cri-o://d529544a108a14274660565d4308a8cdf8c8eb0f9c3ba9367d41eb505adb93f7" gracePeriod=30 Jan 27 14:06:04 crc kubenswrapper[4914]: I0127 14:06:04.343105 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="sg-core" containerID="cri-o://0b4fe6a455ec73498c3e4b0387aed9a6ec57a320dfeda2c8d6cc3ee0b5b12782" gracePeriod=30 Jan 27 14:06:04 crc kubenswrapper[4914]: I0127 14:06:04.343191 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="ceilometer-notification-agent" containerID="cri-o://77c0559587084688e5b8def1b291cbbae0217cd025267c81aeb6e4eef954dd62" gracePeriod=30 Jan 27 14:06:04 crc kubenswrapper[4914]: I0127 14:06:04.343190 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="proxy-httpd" containerID="cri-o://580dd87b74cd74981095091785ee9e27784b7f0a0be14e913d939f32bd7c790d" gracePeriod=30 Jan 27 14:06:04 crc kubenswrapper[4914]: I0127 14:06:04.360292 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.170:3000/\": EOF" Jan 27 14:06:05 crc kubenswrapper[4914]: I0127 14:06:05.446002 4914 generic.go:334] "Generic (PLEG): container finished" podID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerID="580dd87b74cd74981095091785ee9e27784b7f0a0be14e913d939f32bd7c790d" exitCode=0 Jan 27 14:06:05 crc kubenswrapper[4914]: I0127 14:06:05.446264 4914 generic.go:334] "Generic (PLEG): container finished" podID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerID="0b4fe6a455ec73498c3e4b0387aed9a6ec57a320dfeda2c8d6cc3ee0b5b12782" exitCode=2 Jan 27 14:06:05 crc kubenswrapper[4914]: I0127 14:06:05.446075 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0","Type":"ContainerDied","Data":"580dd87b74cd74981095091785ee9e27784b7f0a0be14e913d939f32bd7c790d"} Jan 27 14:06:05 crc kubenswrapper[4914]: I0127 14:06:05.446312 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0","Type":"ContainerDied","Data":"0b4fe6a455ec73498c3e4b0387aed9a6ec57a320dfeda2c8d6cc3ee0b5b12782"} Jan 27 14:06:05 crc kubenswrapper[4914]: I0127 14:06:05.446332 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0","Type":"ContainerDied","Data":"d529544a108a14274660565d4308a8cdf8c8eb0f9c3ba9367d41eb505adb93f7"} Jan 27 14:06:05 crc kubenswrapper[4914]: I0127 14:06:05.446274 4914 generic.go:334] "Generic (PLEG): container finished" 
podID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerID="d529544a108a14274660565d4308a8cdf8c8eb0f9c3ba9367d41eb505adb93f7" exitCode=0 Jan 27 14:06:05 crc kubenswrapper[4914]: I0127 14:06:05.762634 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="4dd99d95-c640-4bb1-ab91-b5415689764b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.171:8776/healthcheck\": read tcp 10.217.0.2:37226->10.217.0.171:8776: read: connection reset by peer" Jan 27 14:06:06 crc kubenswrapper[4914]: I0127 14:06:06.456025 4914 generic.go:334] "Generic (PLEG): container finished" podID="9da1cf46-054d-434c-9b77-a82cfd6353f3" containerID="de8dfd4f95672cd21698c72e3cd09e576103393c5966fd311aa5740381d5c43f" exitCode=0 Jan 27 14:06:06 crc kubenswrapper[4914]: I0127 14:06:06.456065 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8755b5764-8ts5v" event={"ID":"9da1cf46-054d-434c-9b77-a82cfd6353f3","Type":"ContainerDied","Data":"de8dfd4f95672cd21698c72e3cd09e576103393c5966fd311aa5740381d5c43f"} Jan 27 14:06:06 crc kubenswrapper[4914]: I0127 14:06:06.458225 4914 generic.go:334] "Generic (PLEG): container finished" podID="4dd99d95-c640-4bb1-ab91-b5415689764b" containerID="9d16b8915a376fc77b276a361cb7b2b796786b283022e1ca9d05600f37764706" exitCode=0 Jan 27 14:06:06 crc kubenswrapper[4914]: I0127 14:06:06.458249 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4dd99d95-c640-4bb1-ab91-b5415689764b","Type":"ContainerDied","Data":"9d16b8915a376fc77b276a361cb7b2b796786b283022e1ca9d05600f37764706"} Jan 27 14:06:07 crc kubenswrapper[4914]: I0127 14:06:07.474229 4914 generic.go:334] "Generic (PLEG): container finished" podID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerID="77c0559587084688e5b8def1b291cbbae0217cd025267c81aeb6e4eef954dd62" exitCode=0 Jan 27 14:06:07 crc kubenswrapper[4914]: I0127 14:06:07.474337 4914 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0","Type":"ContainerDied","Data":"77c0559587084688e5b8def1b291cbbae0217cd025267c81aeb6e4eef954dd62"} Jan 27 14:06:07 crc kubenswrapper[4914]: I0127 14:06:07.693563 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:06:07 crc kubenswrapper[4914]: I0127 14:06:07.693616 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:06:08 crc kubenswrapper[4914]: I0127 14:06:08.936573 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 14:06:08 crc kubenswrapper[4914]: I0127 14:06:08.999356 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.180774 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.212532 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.265678 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-log-httpd\") pod \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.265724 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-config-data-custom\") pod \"4dd99d95-c640-4bb1-ab91-b5415689764b\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.265753 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-combined-ca-bundle\") pod \"4dd99d95-c640-4bb1-ab91-b5415689764b\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.265786 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-config-data\") pod \"4dd99d95-c640-4bb1-ab91-b5415689764b\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.265810 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dd99d95-c640-4bb1-ab91-b5415689764b-logs\") pod \"4dd99d95-c640-4bb1-ab91-b5415689764b\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.265861 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-public-tls-certs\") pod \"4dd99d95-c640-4bb1-ab91-b5415689764b\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.265901 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-run-httpd\") pod \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.265953 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-internal-tls-certs\") pod \"4dd99d95-c640-4bb1-ab91-b5415689764b\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.265989 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4dd99d95-c640-4bb1-ab91-b5415689764b-etc-machine-id\") pod \"4dd99d95-c640-4bb1-ab91-b5415689764b\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.266013 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-combined-ca-bundle\") pod \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.266051 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-scripts\") pod \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.266123 4914 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbc7f\" (UniqueName: \"kubernetes.io/projected/4dd99d95-c640-4bb1-ab91-b5415689764b-kube-api-access-bbc7f\") pod \"4dd99d95-c640-4bb1-ab91-b5415689764b\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.266211 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-sg-core-conf-yaml\") pod \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.266238 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fhp6\" (UniqueName: \"kubernetes.io/projected/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-kube-api-access-4fhp6\") pod \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.266266 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-scripts\") pod \"4dd99d95-c640-4bb1-ab91-b5415689764b\" (UID: \"4dd99d95-c640-4bb1-ab91-b5415689764b\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.266323 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-config-data\") pod \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\" (UID: \"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.266494 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" 
(UID: "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.266766 4914 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.268106 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" (UID: "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.268473 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dd99d95-c640-4bb1-ab91-b5415689764b-logs" (OuterVolumeSpecName: "logs") pod "4dd99d95-c640-4bb1-ab91-b5415689764b" (UID: "4dd99d95-c640-4bb1-ab91-b5415689764b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.271928 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd99d95-c640-4bb1-ab91-b5415689764b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4dd99d95-c640-4bb1-ab91-b5415689764b" (UID: "4dd99d95-c640-4bb1-ab91-b5415689764b"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.275794 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd99d95-c640-4bb1-ab91-b5415689764b-kube-api-access-bbc7f" (OuterVolumeSpecName: "kube-api-access-bbc7f") pod "4dd99d95-c640-4bb1-ab91-b5415689764b" (UID: "4dd99d95-c640-4bb1-ab91-b5415689764b"). InnerVolumeSpecName "kube-api-access-bbc7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.276629 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-kube-api-access-4fhp6" (OuterVolumeSpecName: "kube-api-access-4fhp6") pod "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" (UID: "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0"). InnerVolumeSpecName "kube-api-access-4fhp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.279936 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-scripts" (OuterVolumeSpecName: "scripts") pod "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" (UID: "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.280036 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4dd99d95-c640-4bb1-ab91-b5415689764b" (UID: "4dd99d95-c640-4bb1-ab91-b5415689764b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.282041 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-scripts" (OuterVolumeSpecName: "scripts") pod "4dd99d95-c640-4bb1-ab91-b5415689764b" (UID: "4dd99d95-c640-4bb1-ab91-b5415689764b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.314347 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8755b5764-8ts5v" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.331569 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" (UID: "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.357007 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dd99d95-c640-4bb1-ab91-b5415689764b" (UID: "4dd99d95-c640-4bb1-ab91-b5415689764b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.357991 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4dd99d95-c640-4bb1-ab91-b5415689764b" (UID: "4dd99d95-c640-4bb1-ab91-b5415689764b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.367922 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-httpd-config\") pod \"9da1cf46-054d-434c-9b77-a82cfd6353f3\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.367992 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-combined-ca-bundle\") pod \"9da1cf46-054d-434c-9b77-a82cfd6353f3\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.368015 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-config\") pod \"9da1cf46-054d-434c-9b77-a82cfd6353f3\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.368055 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-ovndb-tls-certs\") pod \"9da1cf46-054d-434c-9b77-a82cfd6353f3\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.368165 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb5c6\" (UniqueName: \"kubernetes.io/projected/9da1cf46-054d-434c-9b77-a82cfd6353f3-kube-api-access-kb5c6\") pod \"9da1cf46-054d-434c-9b77-a82cfd6353f3\" (UID: \"9da1cf46-054d-434c-9b77-a82cfd6353f3\") " Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.368513 4914 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.368531 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.368540 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dd99d95-c640-4bb1-ab91-b5415689764b-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.368550 4914 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.368558 4914 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.368566 4914 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4dd99d95-c640-4bb1-ab91-b5415689764b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.368574 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.368581 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbc7f\" (UniqueName: \"kubernetes.io/projected/4dd99d95-c640-4bb1-ab91-b5415689764b-kube-api-access-bbc7f\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc 
kubenswrapper[4914]: I0127 14:06:09.368590 4914 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.368599 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fhp6\" (UniqueName: \"kubernetes.io/projected/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-kube-api-access-4fhp6\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.368606 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.381051 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da1cf46-054d-434c-9b77-a82cfd6353f3-kube-api-access-kb5c6" (OuterVolumeSpecName: "kube-api-access-kb5c6") pod "9da1cf46-054d-434c-9b77-a82cfd6353f3" (UID: "9da1cf46-054d-434c-9b77-a82cfd6353f3"). InnerVolumeSpecName "kube-api-access-kb5c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.386195 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4dd99d95-c640-4bb1-ab91-b5415689764b" (UID: "4dd99d95-c640-4bb1-ab91-b5415689764b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.387644 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9da1cf46-054d-434c-9b77-a82cfd6353f3" (UID: "9da1cf46-054d-434c-9b77-a82cfd6353f3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.404792 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-config-data" (OuterVolumeSpecName: "config-data") pod "4dd99d95-c640-4bb1-ab91-b5415689764b" (UID: "4dd99d95-c640-4bb1-ab91-b5415689764b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.414680 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" (UID: "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.459180 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9da1cf46-054d-434c-9b77-a82cfd6353f3" (UID: "9da1cf46-054d-434c-9b77-a82cfd6353f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.459564 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-config-data" (OuterVolumeSpecName: "config-data") pod "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" (UID: "fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.473601 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.473635 4914 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.473646 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.473655 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.473664 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.473673 4914 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4dd99d95-c640-4bb1-ab91-b5415689764b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.473682 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb5c6\" (UniqueName: \"kubernetes.io/projected/9da1cf46-054d-434c-9b77-a82cfd6353f3-kube-api-access-kb5c6\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.480401 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-config" (OuterVolumeSpecName: "config") pod "9da1cf46-054d-434c-9b77-a82cfd6353f3" (UID: "9da1cf46-054d-434c-9b77-a82cfd6353f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.496016 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9da1cf46-054d-434c-9b77-a82cfd6353f3" (UID: "9da1cf46-054d-434c-9b77-a82cfd6353f3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.501673 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8755b5764-8ts5v" event={"ID":"9da1cf46-054d-434c-9b77-a82cfd6353f3","Type":"ContainerDied","Data":"8a62bcd05b6f41cacdde48ae03cd4bb2599caab5b70eddb5057a44a5403672eb"} Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.501723 4914 scope.go:117] "RemoveContainer" containerID="2c79b93366fe15b0506636708d94a36f41b21e63a852e5e89f23185ea10d7f8d" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.501855 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8755b5764-8ts5v" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.511742 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4dd99d95-c640-4bb1-ab91-b5415689764b","Type":"ContainerDied","Data":"1efe3053add5d1e512258e04b402fa99c0555582724a44e406c30e5aa141ebf5"} Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.511821 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.518864 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c3ebffe9-3030-466c-adbf-83deadb5d5d0","Type":"ContainerStarted","Data":"1ea013e5f792ac6d2488aa562a56f449458f26b3bce69dcc71e3d9ce5fe30544"} Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.527514 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e86ce2c8-20a7-4166-82ba-334e8463907b" containerName="cinder-scheduler" containerID="cri-o://95858ea340a1388ecfc38da9cdfd26f47293c7ef6623f13b5eaaa6ff84919e55" gracePeriod=30 Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.527683 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.527726 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e86ce2c8-20a7-4166-82ba-334e8463907b" containerName="probe" containerID="cri-o://8819080a7f82c8524d4b0ea1edfc7d2d5f35b2b4e7832f2d91ce1733db12fbf4" gracePeriod=30 Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.529049 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0","Type":"ContainerDied","Data":"5da504e0d68ca989f1ced31cbd88ddd877b0ba0cd68eba696730815de8f3d411"} Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.561391 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6786c8f89-752mb"] Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.568743 4914 scope.go:117] "RemoveContainer" containerID="de8dfd4f95672cd21698c72e3cd09e576103393c5966fd311aa5740381d5c43f" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.570250 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.818711535 podStartE2EDuration="12.570232118s" podCreationTimestamp="2026-01-27 14:05:57 +0000 UTC" firstStartedPulling="2026-01-27 14:05:58.052737593 +0000 UTC m=+1316.365087678" lastFinishedPulling="2026-01-27 14:06:08.804258176 +0000 UTC m=+1327.116608261" observedRunningTime="2026-01-27 14:06:09.535867576 +0000 UTC m=+1327.848217661" watchObservedRunningTime="2026-01-27 14:06:09.570232118 +0000 UTC m=+1327.882582203" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.579821 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.579872 4914 reconciler_common.go:293] "Volume 
detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da1cf46-054d-434c-9b77-a82cfd6353f3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.598987 4914 scope.go:117] "RemoveContainer" containerID="9d16b8915a376fc77b276a361cb7b2b796786b283022e1ca9d05600f37764706" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.604201 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8755b5764-8ts5v"] Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.612854 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8755b5764-8ts5v"] Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.624988 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.647535 4914 scope.go:117] "RemoveContainer" containerID="bb8b8da350600f27f8928fd62aeb511f7c81a4b9c3e9b73389552c0153506c5e" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.647694 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.655195 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.667916 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:06:09 crc kubenswrapper[4914]: E0127 14:06:09.668358 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da1cf46-054d-434c-9b77-a82cfd6353f3" containerName="neutron-httpd" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668379 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da1cf46-054d-434c-9b77-a82cfd6353f3" containerName="neutron-httpd" Jan 27 14:06:09 crc kubenswrapper[4914]: E0127 14:06:09.668395 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" 
containerName="ceilometer-notification-agent" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668404 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="ceilometer-notification-agent" Jan 27 14:06:09 crc kubenswrapper[4914]: E0127 14:06:09.668420 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="proxy-httpd" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668429 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="proxy-httpd" Jan 27 14:06:09 crc kubenswrapper[4914]: E0127 14:06:09.668448 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd99d95-c640-4bb1-ab91-b5415689764b" containerName="cinder-api" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668457 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd99d95-c640-4bb1-ab91-b5415689764b" containerName="cinder-api" Jan 27 14:06:09 crc kubenswrapper[4914]: E0127 14:06:09.668474 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da1cf46-054d-434c-9b77-a82cfd6353f3" containerName="neutron-api" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668482 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da1cf46-054d-434c-9b77-a82cfd6353f3" containerName="neutron-api" Jan 27 14:06:09 crc kubenswrapper[4914]: E0127 14:06:09.668501 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd99d95-c640-4bb1-ab91-b5415689764b" containerName="cinder-api-log" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668509 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd99d95-c640-4bb1-ab91-b5415689764b" containerName="cinder-api-log" Jan 27 14:06:09 crc kubenswrapper[4914]: E0127 14:06:09.668527 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" 
containerName="ceilometer-central-agent" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668535 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="ceilometer-central-agent" Jan 27 14:06:09 crc kubenswrapper[4914]: E0127 14:06:09.668552 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="sg-core" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668558 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="sg-core" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668752 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd99d95-c640-4bb1-ab91-b5415689764b" containerName="cinder-api" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668765 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="ceilometer-central-agent" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668780 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da1cf46-054d-434c-9b77-a82cfd6353f3" containerName="neutron-api" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668790 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="proxy-httpd" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668804 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da1cf46-054d-434c-9b77-a82cfd6353f3" containerName="neutron-httpd" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668819 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd99d95-c640-4bb1-ab91-b5415689764b" containerName="cinder-api-log" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668847 4914 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="ceilometer-notification-agent" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.668862 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" containerName="sg-core" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.669950 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.671927 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.672410 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.672972 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.680571 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.691640 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.700350 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.720983 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.723514 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.724082 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.743123 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783041 4914 scope.go:117] "RemoveContainer" containerID="580dd87b74cd74981095091785ee9e27784b7f0a0be14e913d939f32bd7c790d" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783201 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tb4m\" (UniqueName: \"kubernetes.io/projected/a0ab6ceb-e872-4ca0-b532-80b329aa647f-kube-api-access-2tb4m\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783281 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783323 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783365 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6ceb-e872-4ca0-b532-80b329aa647f-run-httpd\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783394 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-scripts\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783417 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6ceb-e872-4ca0-b532-80b329aa647f-log-httpd\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783435 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhx4l\" (UniqueName: \"kubernetes.io/projected/6eb8743b-d452-400b-b2ef-818c074597e6-kube-api-access-dhx4l\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783478 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783518 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6eb8743b-d452-400b-b2ef-818c074597e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783547 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-config-data\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783573 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb8743b-d452-400b-b2ef-818c074597e6-logs\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783607 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783628 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-scripts\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783657 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-config-data\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " 
pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783707 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.783731 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.807958 4914 scope.go:117] "RemoveContainer" containerID="0b4fe6a455ec73498c3e4b0387aed9a6ec57a320dfeda2c8d6cc3ee0b5b12782" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.852811 4914 scope.go:117] "RemoveContainer" containerID="77c0559587084688e5b8def1b291cbbae0217cd025267c81aeb6e4eef954dd62" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.874150 4914 scope.go:117] "RemoveContainer" containerID="d529544a108a14274660565d4308a8cdf8c8eb0f9c3ba9367d41eb505adb93f7" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885009 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885078 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885119 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6ceb-e872-4ca0-b532-80b329aa647f-run-httpd\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885150 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-scripts\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885168 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6ceb-e872-4ca0-b532-80b329aa647f-log-httpd\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885183 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhx4l\" (UniqueName: \"kubernetes.io/projected/6eb8743b-d452-400b-b2ef-818c074597e6-kube-api-access-dhx4l\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885223 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885266 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/6eb8743b-d452-400b-b2ef-818c074597e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885295 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-config-data\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885319 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb8743b-d452-400b-b2ef-818c074597e6-logs\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885347 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885367 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-scripts\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885394 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-config-data\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885488 4914 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885513 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.885552 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tb4m\" (UniqueName: \"kubernetes.io/projected/a0ab6ceb-e872-4ca0-b532-80b329aa647f-kube-api-access-2tb4m\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.886703 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6eb8743b-d452-400b-b2ef-818c074597e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.887111 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb8743b-d452-400b-b2ef-818c074597e6-logs\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.890361 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6ceb-e872-4ca0-b532-80b329aa647f-log-httpd\") pod \"ceilometer-0\" (UID: 
\"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.890423 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6ceb-e872-4ca0-b532-80b329aa647f-run-httpd\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.891004 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.891791 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-config-data\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.891859 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-scripts\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.903073 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.903610 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.904732 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.906329 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-scripts\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.906862 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhx4l\" (UniqueName: \"kubernetes.io/projected/6eb8743b-d452-400b-b2ef-818c074597e6-kube-api-access-dhx4l\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.908164 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb8743b-d452-400b-b2ef-818c074597e6-config-data\") pod \"cinder-api-0\" (UID: \"6eb8743b-d452-400b-b2ef-818c074597e6\") " pod="openstack/cinder-api-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.908510 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.908710 4914 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tb4m\" (UniqueName: \"kubernetes.io/projected/a0ab6ceb-e872-4ca0-b532-80b329aa647f-kube-api-access-2tb4m\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:09 crc kubenswrapper[4914]: I0127 14:06:09.910649 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " pod="openstack/ceilometer-0" Jan 27 14:06:10 crc kubenswrapper[4914]: I0127 14:06:10.104682 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:06:10 crc kubenswrapper[4914]: I0127 14:06:10.128801 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:06:10 crc kubenswrapper[4914]: I0127 14:06:10.314051 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd99d95-c640-4bb1-ab91-b5415689764b" path="/var/lib/kubelet/pods/4dd99d95-c640-4bb1-ab91-b5415689764b/volumes" Jan 27 14:06:10 crc kubenswrapper[4914]: I0127 14:06:10.315242 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9da1cf46-054d-434c-9b77-a82cfd6353f3" path="/var/lib/kubelet/pods/9da1cf46-054d-434c-9b77-a82cfd6353f3/volumes" Jan 27 14:06:10 crc kubenswrapper[4914]: I0127 14:06:10.315992 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0" path="/var/lib/kubelet/pods/fbc3e2d6-99a8-4002-ae3e-f7fb4edc06f0/volumes" Jan 27 14:06:10 crc kubenswrapper[4914]: I0127 14:06:10.545689 4914 generic.go:334] "Generic (PLEG): container finished" podID="e86ce2c8-20a7-4166-82ba-334e8463907b" containerID="8819080a7f82c8524d4b0ea1edfc7d2d5f35b2b4e7832f2d91ce1733db12fbf4" exitCode=0 Jan 27 
14:06:10 crc kubenswrapper[4914]: I0127 14:06:10.545744 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e86ce2c8-20a7-4166-82ba-334e8463907b","Type":"ContainerDied","Data":"8819080a7f82c8524d4b0ea1edfc7d2d5f35b2b4e7832f2d91ce1733db12fbf4"} Jan 27 14:06:10 crc kubenswrapper[4914]: I0127 14:06:10.548126 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6786c8f89-752mb" event={"ID":"d4a1896c-aac3-4c71-8d04-e608cc34f5f6","Type":"ContainerStarted","Data":"41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f"} Jan 27 14:06:10 crc kubenswrapper[4914]: I0127 14:06:10.548177 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6786c8f89-752mb" event={"ID":"d4a1896c-aac3-4c71-8d04-e608cc34f5f6","Type":"ContainerStarted","Data":"a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3"} Jan 27 14:06:10 crc kubenswrapper[4914]: I0127 14:06:10.548193 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6786c8f89-752mb" event={"ID":"d4a1896c-aac3-4c71-8d04-e608cc34f5f6","Type":"ContainerStarted","Data":"42171d79af5d2240c055183be627aebb29edf70f3ef54a21688fb3f838a934ce"} Jan 27 14:06:10 crc kubenswrapper[4914]: I0127 14:06:10.569705 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6786c8f89-752mb" podStartSLOduration=7.569688779 podStartE2EDuration="7.569688779s" podCreationTimestamp="2026-01-27 14:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:10.565197216 +0000 UTC m=+1328.877547301" watchObservedRunningTime="2026-01-27 14:06:10.569688779 +0000 UTC m=+1328.882038864" Jan 27 14:06:10 crc kubenswrapper[4914]: I0127 14:06:10.619901 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:06:10 crc kubenswrapper[4914]: 
I0127 14:06:10.631025 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:10 crc kubenswrapper[4914]: I0127 14:06:10.848989 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75b4645c86-9r9q2" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.647503 4914 generic.go:334] "Generic (PLEG): container finished" podID="e86ce2c8-20a7-4166-82ba-334e8463907b" containerID="95858ea340a1388ecfc38da9cdfd26f47293c7ef6623f13b5eaaa6ff84919e55" exitCode=0 Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.647903 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e86ce2c8-20a7-4166-82ba-334e8463907b","Type":"ContainerDied","Data":"95858ea340a1388ecfc38da9cdfd26f47293c7ef6623f13b5eaaa6ff84919e55"} Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.684067 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6eb8743b-d452-400b-b2ef-818c074597e6","Type":"ContainerStarted","Data":"0c81bb9393972715814f3d66c30d9c46eba45470402a67f8ec51bfa266726669"} Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.684398 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6eb8743b-d452-400b-b2ef-818c074597e6","Type":"ContainerStarted","Data":"231ae4288c24b4154a20dfd6a52bb0507c24b5e6da3c1585b653a24ba6813a54"} Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.717809 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0ab6ceb-e872-4ca0-b532-80b329aa647f","Type":"ContainerStarted","Data":"b78da211e6bd6f8da56ae7260c177c76c1c783c14ba4009cc2b1b2b551c765a6"} Jan 27 14:06:11 crc kubenswrapper[4914]: 
I0127 14:06:11.717864 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.717888 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.832465 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.832720 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6740124e-468c-4527-af23-511164f5724e" containerName="glance-log" containerID="cri-o://ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1" gracePeriod=30 Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.833250 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6740124e-468c-4527-af23-511164f5724e" containerName="glance-httpd" containerID="cri-o://164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775" gracePeriod=30 Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.872465 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.935368 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-scripts\") pod \"e86ce2c8-20a7-4166-82ba-334e8463907b\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.935441 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-combined-ca-bundle\") pod \"e86ce2c8-20a7-4166-82ba-334e8463907b\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.935468 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-config-data-custom\") pod \"e86ce2c8-20a7-4166-82ba-334e8463907b\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.935519 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntrbb\" (UniqueName: \"kubernetes.io/projected/e86ce2c8-20a7-4166-82ba-334e8463907b-kube-api-access-ntrbb\") pod \"e86ce2c8-20a7-4166-82ba-334e8463907b\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.935615 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e86ce2c8-20a7-4166-82ba-334e8463907b-etc-machine-id\") pod \"e86ce2c8-20a7-4166-82ba-334e8463907b\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.935711 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-config-data\") pod \"e86ce2c8-20a7-4166-82ba-334e8463907b\" (UID: \"e86ce2c8-20a7-4166-82ba-334e8463907b\") " Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.940349 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e86ce2c8-20a7-4166-82ba-334e8463907b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e86ce2c8-20a7-4166-82ba-334e8463907b" (UID: "e86ce2c8-20a7-4166-82ba-334e8463907b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.960883 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-scripts" (OuterVolumeSpecName: "scripts") pod "e86ce2c8-20a7-4166-82ba-334e8463907b" (UID: "e86ce2c8-20a7-4166-82ba-334e8463907b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.961576 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e86ce2c8-20a7-4166-82ba-334e8463907b" (UID: "e86ce2c8-20a7-4166-82ba-334e8463907b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:11 crc kubenswrapper[4914]: I0127 14:06:11.961606 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e86ce2c8-20a7-4166-82ba-334e8463907b-kube-api-access-ntrbb" (OuterVolumeSpecName: "kube-api-access-ntrbb") pod "e86ce2c8-20a7-4166-82ba-334e8463907b" (UID: "e86ce2c8-20a7-4166-82ba-334e8463907b"). InnerVolumeSpecName "kube-api-access-ntrbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.003544 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e86ce2c8-20a7-4166-82ba-334e8463907b" (UID: "e86ce2c8-20a7-4166-82ba-334e8463907b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.038374 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.038544 4914 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.040048 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntrbb\" (UniqueName: \"kubernetes.io/projected/e86ce2c8-20a7-4166-82ba-334e8463907b-kube-api-access-ntrbb\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.040131 4914 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e86ce2c8-20a7-4166-82ba-334e8463907b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.040206 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.066087 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-config-data" (OuterVolumeSpecName: "config-data") pod "e86ce2c8-20a7-4166-82ba-334e8463907b" (UID: "e86ce2c8-20a7-4166-82ba-334e8463907b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.141584 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e86ce2c8-20a7-4166-82ba-334e8463907b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.693819 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.694134 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" containerName="glance-log" containerID="cri-o://81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3" gracePeriod=30 Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.694307 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" containerName="glance-httpd" containerID="cri-o://5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2" gracePeriod=30 Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.730545 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.731028 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e86ce2c8-20a7-4166-82ba-334e8463907b","Type":"ContainerDied","Data":"bfc182738a6855e1a3b4144352a83288e1ec6933d0b63463ab20a0a3921e5527"} Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.731526 4914 scope.go:117] "RemoveContainer" containerID="8819080a7f82c8524d4b0ea1edfc7d2d5f35b2b4e7832f2d91ce1733db12fbf4" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.733030 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6eb8743b-d452-400b-b2ef-818c074597e6","Type":"ContainerStarted","Data":"37984a78ba8edd0d3193f9f240672f726cad9169de1128320f3dc044b8b67730"} Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.733168 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.737102 4914 generic.go:334] "Generic (PLEG): container finished" podID="6740124e-468c-4527-af23-511164f5724e" containerID="ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1" exitCode=143 Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.737185 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6740124e-468c-4527-af23-511164f5724e","Type":"ContainerDied","Data":"ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1"} Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.739064 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0ab6ceb-e872-4ca0-b532-80b329aa647f","Type":"ContainerStarted","Data":"f509a2ccb63df203c72cfc2b297f74dd625b3fb6b2d58c5433422f0f6ebbe81e"} Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.753891 4914 scope.go:117] "RemoveContainer" 
containerID="95858ea340a1388ecfc38da9cdfd26f47293c7ef6623f13b5eaaa6ff84919e55" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.767104 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.76708608 podStartE2EDuration="3.76708608s" podCreationTimestamp="2026-01-27 14:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:12.76197237 +0000 UTC m=+1331.074322475" watchObservedRunningTime="2026-01-27 14:06:12.76708608 +0000 UTC m=+1331.079436165" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.785682 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.803421 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.818933 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:06:12 crc kubenswrapper[4914]: E0127 14:06:12.819314 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86ce2c8-20a7-4166-82ba-334e8463907b" containerName="probe" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.819330 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86ce2c8-20a7-4166-82ba-334e8463907b" containerName="probe" Jan 27 14:06:12 crc kubenswrapper[4914]: E0127 14:06:12.819341 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86ce2c8-20a7-4166-82ba-334e8463907b" containerName="cinder-scheduler" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.819349 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86ce2c8-20a7-4166-82ba-334e8463907b" containerName="cinder-scheduler" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.819525 4914 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e86ce2c8-20a7-4166-82ba-334e8463907b" containerName="probe" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.819550 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="e86ce2c8-20a7-4166-82ba-334e8463907b" containerName="cinder-scheduler" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.820483 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.822451 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.826765 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.963851 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49t8n\" (UniqueName: \"kubernetes.io/projected/fba39866-8924-4253-8bc6-e4c85fc9de31-kube-api-access-49t8n\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.963920 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fba39866-8924-4253-8bc6-e4c85fc9de31-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.963967 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fba39866-8924-4253-8bc6-e4c85fc9de31-config-data\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 
14:06:12.963985 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fba39866-8924-4253-8bc6-e4c85fc9de31-scripts\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.964027 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba39866-8924-4253-8bc6-e4c85fc9de31-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:12 crc kubenswrapper[4914]: I0127 14:06:12.964081 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fba39866-8924-4253-8bc6-e4c85fc9de31-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.066383 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fba39866-8924-4253-8bc6-e4c85fc9de31-config-data\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.066437 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fba39866-8924-4253-8bc6-e4c85fc9de31-scripts\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.066500 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fba39866-8924-4253-8bc6-e4c85fc9de31-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.066566 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fba39866-8924-4253-8bc6-e4c85fc9de31-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.066598 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49t8n\" (UniqueName: \"kubernetes.io/projected/fba39866-8924-4253-8bc6-e4c85fc9de31-kube-api-access-49t8n\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.066645 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fba39866-8924-4253-8bc6-e4c85fc9de31-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.066787 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fba39866-8924-4253-8bc6-e4c85fc9de31-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.072765 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fba39866-8924-4253-8bc6-e4c85fc9de31-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.072788 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fba39866-8924-4253-8bc6-e4c85fc9de31-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.073462 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba39866-8924-4253-8bc6-e4c85fc9de31-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.084751 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49t8n\" (UniqueName: \"kubernetes.io/projected/fba39866-8924-4253-8bc6-e4c85fc9de31-kube-api-access-49t8n\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.084863 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fba39866-8924-4253-8bc6-e4c85fc9de31-config-data\") pod \"cinder-scheduler-0\" (UID: \"fba39866-8924-4253-8bc6-e4c85fc9de31\") " pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.156167 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.671217 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.751037 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fba39866-8924-4253-8bc6-e4c85fc9de31","Type":"ContainerStarted","Data":"013f9809fe54740061d57f729ba02c3e66e40305a3332ba789ef75344fd6faca"} Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.753885 4914 generic.go:334] "Generic (PLEG): container finished" podID="ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" containerID="81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3" exitCode=143 Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.753981 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4","Type":"ContainerDied","Data":"81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3"} Jan 27 14:06:13 crc kubenswrapper[4914]: I0127 14:06:13.757373 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0ab6ceb-e872-4ca0-b532-80b329aa647f","Type":"ContainerStarted","Data":"8d8a73bc1f588632f4ea3372a217dc6d32b2a3802c7fba9764f7ad75dd5319a8"} Jan 27 14:06:14 crc kubenswrapper[4914]: I0127 14:06:14.075632 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:14 crc kubenswrapper[4914]: I0127 14:06:14.307549 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e86ce2c8-20a7-4166-82ba-334e8463907b" path="/var/lib/kubelet/pods/e86ce2c8-20a7-4166-82ba-334e8463907b/volumes" Jan 27 14:06:14 crc kubenswrapper[4914]: I0127 14:06:14.559332 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:14 crc 
kubenswrapper[4914]: I0127 14:06:14.769897 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fba39866-8924-4253-8bc6-e4c85fc9de31","Type":"ContainerStarted","Data":"2677787e75ae5b754dd2e0f45d78cdf0db5ac160bba991b671873deaa6e23036"} Jan 27 14:06:14 crc kubenswrapper[4914]: I0127 14:06:14.773158 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0ab6ceb-e872-4ca0-b532-80b329aa647f","Type":"ContainerStarted","Data":"6492d58038e86047e5dd3722ff0072e71f1fdad1a444011aebceab55538502d0"} Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.662353 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.668085 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75b4645c86-9r9q2" Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.725959 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dd59938-4cf8-4632-8b1c-237cf981fd5f-logs\") pod \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.727229 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-horizon-tls-certs\") pod \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.729104 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpxjb\" (UniqueName: \"kubernetes.io/projected/1dd59938-4cf8-4632-8b1c-237cf981fd5f-kube-api-access-fpxjb\") pod \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\" (UID: 
\"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.728764 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dd59938-4cf8-4632-8b1c-237cf981fd5f-logs" (OuterVolumeSpecName: "logs") pod "1dd59938-4cf8-4632-8b1c-237cf981fd5f" (UID: "1dd59938-4cf8-4632-8b1c-237cf981fd5f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.730150 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-combined-ca-bundle\") pod \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.730275 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-config-data\") pod \"6740124e-468c-4527-af23-511164f5724e\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.730391 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6740124e-468c-4527-af23-511164f5724e-httpd-run\") pod \"6740124e-468c-4527-af23-511164f5724e\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.730468 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-combined-ca-bundle\") pod \"6740124e-468c-4527-af23-511164f5724e\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.730557 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-public-tls-certs\") pod \"6740124e-468c-4527-af23-511164f5724e\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.730799 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1dd59938-4cf8-4632-8b1c-237cf981fd5f-config-data\") pod \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.730902 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1dd59938-4cf8-4632-8b1c-237cf981fd5f-scripts\") pod \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.730976 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"6740124e-468c-4527-af23-511164f5724e\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.731046 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-horizon-secret-key\") pod \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\" (UID: \"1dd59938-4cf8-4632-8b1c-237cf981fd5f\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.731131 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6740124e-468c-4527-af23-511164f5724e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6740124e-468c-4527-af23-511164f5724e" (UID: "6740124e-468c-4527-af23-511164f5724e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.731401 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44nx8\" (UniqueName: \"kubernetes.io/projected/6740124e-468c-4527-af23-511164f5724e-kube-api-access-44nx8\") pod \"6740124e-468c-4527-af23-511164f5724e\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.731497 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-scripts\") pod \"6740124e-468c-4527-af23-511164f5724e\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.731644 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6740124e-468c-4527-af23-511164f5724e-logs\") pod \"6740124e-468c-4527-af23-511164f5724e\" (UID: \"6740124e-468c-4527-af23-511164f5724e\") " Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.732250 4914 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6740124e-468c-4527-af23-511164f5724e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.732313 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dd59938-4cf8-4632-8b1c-237cf981fd5f-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.733421 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6740124e-468c-4527-af23-511164f5724e-logs" (OuterVolumeSpecName: "logs") pod "6740124e-468c-4527-af23-511164f5724e" (UID: "6740124e-468c-4527-af23-511164f5724e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.737958 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1dd59938-4cf8-4632-8b1c-237cf981fd5f" (UID: "1dd59938-4cf8-4632-8b1c-237cf981fd5f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.742063 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd59938-4cf8-4632-8b1c-237cf981fd5f-kube-api-access-fpxjb" (OuterVolumeSpecName: "kube-api-access-fpxjb") pod "1dd59938-4cf8-4632-8b1c-237cf981fd5f" (UID: "1dd59938-4cf8-4632-8b1c-237cf981fd5f"). InnerVolumeSpecName "kube-api-access-fpxjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.744097 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-scripts" (OuterVolumeSpecName: "scripts") pod "6740124e-468c-4527-af23-511164f5724e" (UID: "6740124e-468c-4527-af23-511164f5724e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.744206 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6740124e-468c-4527-af23-511164f5724e-kube-api-access-44nx8" (OuterVolumeSpecName: "kube-api-access-44nx8") pod "6740124e-468c-4527-af23-511164f5724e" (UID: "6740124e-468c-4527-af23-511164f5724e"). InnerVolumeSpecName "kube-api-access-44nx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.746034 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "6740124e-468c-4527-af23-511164f5724e" (UID: "6740124e-468c-4527-af23-511164f5724e"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.791089 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6740124e-468c-4527-af23-511164f5724e" (UID: "6740124e-468c-4527-af23-511164f5724e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.821395 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd59938-4cf8-4632-8b1c-237cf981fd5f-scripts" (OuterVolumeSpecName: "scripts") pod "1dd59938-4cf8-4632-8b1c-237cf981fd5f" (UID: "1dd59938-4cf8-4632-8b1c-237cf981fd5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.832877 4914 generic.go:334] "Generic (PLEG): container finished" podID="6740124e-468c-4527-af23-511164f5724e" containerID="164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775" exitCode=0
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.832964 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6740124e-468c-4527-af23-511164f5724e","Type":"ContainerDied","Data":"164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775"}
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.832996 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6740124e-468c-4527-af23-511164f5724e","Type":"ContainerDied","Data":"73d1f7b3615ab9f919d43b7987dd286ea16dd5ad0dbf7060458a7a75be0f0fbf"}
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.833014 4914 scope.go:117] "RemoveContainer" containerID="164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775"
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.833184 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.836070 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44nx8\" (UniqueName: \"kubernetes.io/projected/6740124e-468c-4527-af23-511164f5724e-kube-api-access-44nx8\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.836100 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.836112 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6740124e-468c-4527-af23-511164f5724e-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.836124 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpxjb\" (UniqueName: \"kubernetes.io/projected/1dd59938-4cf8-4632-8b1c-237cf981fd5f-kube-api-access-fpxjb\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.836136 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.836146 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1dd59938-4cf8-4632-8b1c-237cf981fd5f-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.836171 4914 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.836183 4914 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.848932 4914 generic.go:334] "Generic (PLEG): container finished" podID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerID="ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113" exitCode=137
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.849103 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75b4645c86-9r9q2" event={"ID":"1dd59938-4cf8-4632-8b1c-237cf981fd5f","Type":"ContainerDied","Data":"ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113"}
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.850255 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75b4645c86-9r9q2" event={"ID":"1dd59938-4cf8-4632-8b1c-237cf981fd5f","Type":"ContainerDied","Data":"5e6ef53ceac01b8275f3bd37ee4e243e76f0850f34cdc7b8d4a9b792d5703931"}
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.850476 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75b4645c86-9r9q2"
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.859655 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd59938-4cf8-4632-8b1c-237cf981fd5f-config-data" (OuterVolumeSpecName: "config-data") pod "1dd59938-4cf8-4632-8b1c-237cf981fd5f" (UID: "1dd59938-4cf8-4632-8b1c-237cf981fd5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.870676 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fba39866-8924-4253-8bc6-e4c85fc9de31","Type":"ContainerStarted","Data":"c4b56069276b50823ace921989f938444afdc02d6684e40582b5e261fe58f64e"}
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.874479 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.153:9292/healthcheck\": read tcp 10.217.0.2:46024->10.217.0.153:9292: read: connection reset by peer"
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.876225 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9292/healthcheck\": read tcp 10.217.0.2:46008->10.217.0.153:9292: read: connection reset by peer"
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.884444 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dd59938-4cf8-4632-8b1c-237cf981fd5f" (UID: "1dd59938-4cf8-4632-8b1c-237cf981fd5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.885735 4914 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.903523 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.9035016860000002 podStartE2EDuration="3.903501686s" podCreationTimestamp="2026-01-27 14:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:15.890116169 +0000 UTC m=+1334.202466274" watchObservedRunningTime="2026-01-27 14:06:15.903501686 +0000 UTC m=+1334.215851771"
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.915227 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-config-data" (OuterVolumeSpecName: "config-data") pod "6740124e-468c-4527-af23-511164f5724e" (UID: "6740124e-468c-4527-af23-511164f5724e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.936929 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "1dd59938-4cf8-4632-8b1c-237cf981fd5f" (UID: "1dd59938-4cf8-4632-8b1c-237cf981fd5f"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.937699 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6740124e-468c-4527-af23-511164f5724e" (UID: "6740124e-468c-4527-af23-511164f5724e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.950033 4914 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.950058 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd59938-4cf8-4632-8b1c-237cf981fd5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.950069 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.950078 4914 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6740124e-468c-4527-af23-511164f5724e-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.950086 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1dd59938-4cf8-4632-8b1c-237cf981fd5f-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:15 crc kubenswrapper[4914]: I0127 14:06:15.950096 4914 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:15.999981 4914 scope.go:117] "RemoveContainer" containerID="ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.037821 4914 scope.go:117] "RemoveContainer" containerID="164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775"
Jan 27 14:06:16 crc kubenswrapper[4914]: E0127 14:06:16.038420 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775\": container with ID starting with 164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775 not found: ID does not exist" containerID="164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.038461 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775"} err="failed to get container status \"164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775\": rpc error: code = NotFound desc = could not find container \"164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775\": container with ID starting with 164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775 not found: ID does not exist"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.038489 4914 scope.go:117] "RemoveContainer" containerID="ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1"
Jan 27 14:06:16 crc kubenswrapper[4914]: E0127 14:06:16.039816 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1\": container with ID starting with ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1 not found: ID does not exist" containerID="ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.039876 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1"} err="failed to get container status \"ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1\": rpc error: code = NotFound desc = could not find container \"ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1\": container with ID starting with ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1 not found: ID does not exist"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.039891 4914 scope.go:117] "RemoveContainer" containerID="9b6322b0157303af4f92615415e427acec270b33985cd6111ea50a4b30ddc85c"
Jan 27 14:06:16 crc kubenswrapper[4914]: E0127 14:06:16.132260 4914 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece67619_8ef7_4c3f_ba5a_36fcc1f05fe4.slice/crio-conmon-81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece67619_8ef7_4c3f_ba5a_36fcc1f05fe4.slice/crio-5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode86ce2c8_20a7_4166_82ba_334e8463907b.slice/crio-bfc182738a6855e1a3b4144352a83288e1ec6933d0b63463ab20a0a3921e5527\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6740124e_468c_4527_af23_511164f5724e.slice/crio-conmon-ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece67619_8ef7_4c3f_ba5a_36fcc1f05fe4.slice/crio-conmon-5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode86ce2c8_20a7_4166_82ba_334e8463907b.slice/crio-95858ea340a1388ecfc38da9cdfd26f47293c7ef6623f13b5eaaa6ff84919e55.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6740124e_468c_4527_af23_511164f5724e.slice/crio-164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dd59938_4cf8_4632_8b1c_237cf981fd5f.slice/crio-ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece67619_8ef7_4c3f_ba5a_36fcc1f05fe4.slice/crio-81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dd59938_4cf8_4632_8b1c_237cf981fd5f.slice/crio-conmon-ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode86ce2c8_20a7_4166_82ba_334e8463907b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6740124e_468c_4527_af23_511164f5724e.slice/crio-conmon-164c4727d96289944656185b7284840d313c02c08f0a5556e8af415193a67775.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6740124e_468c_4527_af23_511164f5724e.slice/crio-ed266c4704b0c791690925a27eb19d92279caa1b06e68dbe9d32b89aa36acba1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode86ce2c8_20a7_4166_82ba_334e8463907b.slice/crio-conmon-95858ea340a1388ecfc38da9cdfd26f47293c7ef6623f13b5eaaa6ff84919e55.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.185795 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.215207 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.236877 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:06:16 crc kubenswrapper[4914]: E0127 14:06:16.237279 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6740124e-468c-4527-af23-511164f5724e" containerName="glance-httpd"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.237295 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="6740124e-468c-4527-af23-511164f5724e" containerName="glance-httpd"
Jan 27 14:06:16 crc kubenswrapper[4914]: E0127 14:06:16.237305 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.237312 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon"
Jan 27 14:06:16 crc kubenswrapper[4914]: E0127 14:06:16.237331 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6740124e-468c-4527-af23-511164f5724e" containerName="glance-log"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.237337 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="6740124e-468c-4527-af23-511164f5724e" containerName="glance-log"
Jan 27 14:06:16 crc kubenswrapper[4914]: E0127 14:06:16.237352 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon-log"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.237358 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon-log"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.237502 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.237519 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" containerName="horizon-log"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.237528 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="6740124e-468c-4527-af23-511164f5724e" containerName="glance-log"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.237542 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="6740124e-468c-4527-af23-511164f5724e" containerName="glance-httpd"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.238397 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.242657 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.242906 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.304083 4914 scope.go:117] "RemoveContainer" containerID="ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.322950 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6740124e-468c-4527-af23-511164f5724e" path="/var/lib/kubelet/pods/6740124e-468c-4527-af23-511164f5724e/volumes"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.323659 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75b4645c86-9r9q2"]
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.362354 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2330c3f1-da78-4cc1-a16a-856037a1f395-logs\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.362722 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2330c3f1-da78-4cc1-a16a-856037a1f395-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.362756 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2330c3f1-da78-4cc1-a16a-856037a1f395-scripts\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.362852 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2330c3f1-da78-4cc1-a16a-856037a1f395-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.362930 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2330c3f1-da78-4cc1-a16a-856037a1f395-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.362947 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z8qn\" (UniqueName: \"kubernetes.io/projected/2330c3f1-da78-4cc1-a16a-856037a1f395-kube-api-access-8z8qn\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.363208 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75b4645c86-9r9q2"]
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.363283 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2330c3f1-da78-4cc1-a16a-856037a1f395-config-data\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.364214 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.378462 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.383853 4914 scope.go:117] "RemoveContainer" containerID="9b6322b0157303af4f92615415e427acec270b33985cd6111ea50a4b30ddc85c"
Jan 27 14:06:16 crc kubenswrapper[4914]: E0127 14:06:16.385182 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b6322b0157303af4f92615415e427acec270b33985cd6111ea50a4b30ddc85c\": container with ID starting with 9b6322b0157303af4f92615415e427acec270b33985cd6111ea50a4b30ddc85c not found: ID does not exist" containerID="9b6322b0157303af4f92615415e427acec270b33985cd6111ea50a4b30ddc85c"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.385223 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b6322b0157303af4f92615415e427acec270b33985cd6111ea50a4b30ddc85c"} err="failed to get container status \"9b6322b0157303af4f92615415e427acec270b33985cd6111ea50a4b30ddc85c\": rpc error: code = NotFound desc = could not find container \"9b6322b0157303af4f92615415e427acec270b33985cd6111ea50a4b30ddc85c\": container with ID starting with 9b6322b0157303af4f92615415e427acec270b33985cd6111ea50a4b30ddc85c not found: ID does not exist"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.385249 4914 scope.go:117] "RemoveContainer" containerID="ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113"
Jan 27 14:06:16 crc kubenswrapper[4914]: E0127 14:06:16.385588 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113\": container with ID starting with ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113 not found: ID does not exist" containerID="ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.393885 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113"} err="failed to get container status \"ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113\": rpc error: code = NotFound desc = could not find container \"ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113\": container with ID starting with ad832d0feb989590a5314d3754c43edf28a65278eb9fabcffc6d095749855113 not found: ID does not exist"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.403704 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.465557 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-internal-tls-certs\") pod \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") "
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.465609 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-scripts\") pod \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") "
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.465637 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-logs\") pod \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") "
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.465722 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-config-data\") pod \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") "
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.465781 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp7pw\" (UniqueName: \"kubernetes.io/projected/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-kube-api-access-cp7pw\") pod \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") "
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.465851 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-httpd-run\") pod \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") "
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.465890 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-combined-ca-bundle\") pod \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") "
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.465925 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\" (UID: \"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4\") "
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.466108 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2330c3f1-da78-4cc1-a16a-856037a1f395-scripts\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.466176 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2330c3f1-da78-4cc1-a16a-856037a1f395-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.466236 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2330c3f1-da78-4cc1-a16a-856037a1f395-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.466258 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z8qn\" (UniqueName: \"kubernetes.io/projected/2330c3f1-da78-4cc1-a16a-856037a1f395-kube-api-access-8z8qn\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.466324 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2330c3f1-da78-4cc1-a16a-856037a1f395-config-data\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.466356 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.466379 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2330c3f1-da78-4cc1-a16a-856037a1f395-logs\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.466408 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2330c3f1-da78-4cc1-a16a-856037a1f395-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.469348 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.471443 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2330c3f1-da78-4cc1-a16a-856037a1f395-logs\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.475153 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2330c3f1-da78-4cc1-a16a-856037a1f395-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.475380 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" (UID: "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.475948 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-logs" (OuterVolumeSpecName: "logs") pod "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" (UID: "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.476190 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2330c3f1-da78-4cc1-a16a-856037a1f395-config-data\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.476270 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2330c3f1-da78-4cc1-a16a-856037a1f395-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.476294 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" (UID: "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.480515 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2330c3f1-da78-4cc1-a16a-856037a1f395-scripts\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.480677 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-kube-api-access-cp7pw" (OuterVolumeSpecName: "kube-api-access-cp7pw") pod "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" (UID: "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4"). 
InnerVolumeSpecName "kube-api-access-cp7pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.481286 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2330c3f1-da78-4cc1-a16a-856037a1f395-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.489597 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-scripts" (OuterVolumeSpecName: "scripts") pod "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" (UID: "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.502392 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z8qn\" (UniqueName: \"kubernetes.io/projected/2330c3f1-da78-4cc1-a16a-856037a1f395-kube-api-access-8z8qn\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.548244 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"2330c3f1-da78-4cc1-a16a-856037a1f395\") " pod="openstack/glance-default-external-api-0" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.568922 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp7pw\" (UniqueName: \"kubernetes.io/projected/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-kube-api-access-cp7pw\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 
14:06:16.569293 4914 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.569445 4914 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.569534 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.569629 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.576264 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.578020 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" (UID: "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.578139 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" (UID: "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.601946 4914 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.604546 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-config-data" (OuterVolumeSpecName: "config-data") pod "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" (UID: "ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.670910 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.670948 4914 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.670959 4914 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.670967 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.736503 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-4zv8f"] Jan 27 14:06:16 crc kubenswrapper[4914]: E0127 14:06:16.745409 4914 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" containerName="glance-httpd" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.745443 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" containerName="glance-httpd" Jan 27 14:06:16 crc kubenswrapper[4914]: E0127 14:06:16.745468 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" containerName="glance-log" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.745474 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" containerName="glance-log" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.745703 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" containerName="glance-httpd" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.745715 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" containerName="glance-log" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.746552 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-4zv8f" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.765881 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-4zv8f"] Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.842366 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-ncrjl"] Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.843851 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-ncrjl" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.850747 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ncrjl"] Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.874497 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8921e83-a3ec-4c05-9501-47e07d28a3ac-operator-scripts\") pod \"nova-cell0-db-create-ncrjl\" (UID: \"d8921e83-a3ec-4c05-9501-47e07d28a3ac\") " pod="openstack/nova-cell0-db-create-ncrjl" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.874554 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwbs6\" (UniqueName: \"kubernetes.io/projected/30de17e6-b0bd-4549-b794-052a3b6c9d84-kube-api-access-gwbs6\") pod \"nova-api-db-create-4zv8f\" (UID: \"30de17e6-b0bd-4549-b794-052a3b6c9d84\") " pod="openstack/nova-api-db-create-4zv8f" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.874591 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w274p\" (UniqueName: \"kubernetes.io/projected/d8921e83-a3ec-4c05-9501-47e07d28a3ac-kube-api-access-w274p\") pod \"nova-cell0-db-create-ncrjl\" (UID: \"d8921e83-a3ec-4c05-9501-47e07d28a3ac\") " pod="openstack/nova-cell0-db-create-ncrjl" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.874613 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30de17e6-b0bd-4549-b794-052a3b6c9d84-operator-scripts\") pod \"nova-api-db-create-4zv8f\" (UID: \"30de17e6-b0bd-4549-b794-052a3b6c9d84\") " pod="openstack/nova-api-db-create-4zv8f" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.896773 4914 generic.go:334] "Generic (PLEG): container finished" 
podID="ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" containerID="5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2" exitCode=0 Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.896890 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.896886 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4","Type":"ContainerDied","Data":"5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2"} Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.896934 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4","Type":"ContainerDied","Data":"811ec6b00a391f773d7175623c9934fd9641d75cb808a091fc4cf1d782c0f148"} Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.896950 4914 scope.go:117] "RemoveContainer" containerID="5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.908451 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="ceilometer-central-agent" containerID="cri-o://f509a2ccb63df203c72cfc2b297f74dd625b3fb6b2d58c5433422f0f6ebbe81e" gracePeriod=30 Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.908733 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0ab6ceb-e872-4ca0-b532-80b329aa647f","Type":"ContainerStarted","Data":"9e3abfcfbbdbd34795207b9cc8c3ffaf62281e5ff38250ee448c4dff8bf30921"} Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.908778 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 
14:06:16.909031 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="proxy-httpd" containerID="cri-o://9e3abfcfbbdbd34795207b9cc8c3ffaf62281e5ff38250ee448c4dff8bf30921" gracePeriod=30 Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.909076 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="sg-core" containerID="cri-o://6492d58038e86047e5dd3722ff0072e71f1fdad1a444011aebceab55538502d0" gracePeriod=30 Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.909107 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="ceilometer-notification-agent" containerID="cri-o://8d8a73bc1f588632f4ea3372a217dc6d32b2a3802c7fba9764f7ad75dd5319a8" gracePeriod=30 Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.959022 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9dea-account-create-update-phmgm"] Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.961116 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9dea-account-create-update-phmgm" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.964337 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.971858 4914 scope.go:117] "RemoveContainer" containerID="81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.976333 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8921e83-a3ec-4c05-9501-47e07d28a3ac-operator-scripts\") pod \"nova-cell0-db-create-ncrjl\" (UID: \"d8921e83-a3ec-4c05-9501-47e07d28a3ac\") " pod="openstack/nova-cell0-db-create-ncrjl" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.976574 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f745b4c-d50b-4e67-902d-ea60fedda7dc-operator-scripts\") pod \"nova-api-9dea-account-create-update-phmgm\" (UID: \"9f745b4c-d50b-4e67-902d-ea60fedda7dc\") " pod="openstack/nova-api-9dea-account-create-update-phmgm" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.976711 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwbs6\" (UniqueName: \"kubernetes.io/projected/30de17e6-b0bd-4549-b794-052a3b6c9d84-kube-api-access-gwbs6\") pod \"nova-api-db-create-4zv8f\" (UID: \"30de17e6-b0bd-4549-b794-052a3b6c9d84\") " pod="openstack/nova-api-db-create-4zv8f" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.976861 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r87n7\" (UniqueName: \"kubernetes.io/projected/9f745b4c-d50b-4e67-902d-ea60fedda7dc-kube-api-access-r87n7\") pod \"nova-api-9dea-account-create-update-phmgm\" (UID: 
\"9f745b4c-d50b-4e67-902d-ea60fedda7dc\") " pod="openstack/nova-api-9dea-account-create-update-phmgm" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.976956 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w274p\" (UniqueName: \"kubernetes.io/projected/d8921e83-a3ec-4c05-9501-47e07d28a3ac-kube-api-access-w274p\") pod \"nova-cell0-db-create-ncrjl\" (UID: \"d8921e83-a3ec-4c05-9501-47e07d28a3ac\") " pod="openstack/nova-cell0-db-create-ncrjl" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.977055 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30de17e6-b0bd-4549-b794-052a3b6c9d84-operator-scripts\") pod \"nova-api-db-create-4zv8f\" (UID: \"30de17e6-b0bd-4549-b794-052a3b6c9d84\") " pod="openstack/nova-api-db-create-4zv8f" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.977633 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.920385086 podStartE2EDuration="7.977609153s" podCreationTimestamp="2026-01-27 14:06:09 +0000 UTC" firstStartedPulling="2026-01-27 14:06:10.633663452 +0000 UTC m=+1328.946013537" lastFinishedPulling="2026-01-27 14:06:15.690887519 +0000 UTC m=+1334.003237604" observedRunningTime="2026-01-27 14:06:16.948610818 +0000 UTC m=+1335.260960913" watchObservedRunningTime="2026-01-27 14:06:16.977609153 +0000 UTC m=+1335.289959248" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.978710 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8921e83-a3ec-4c05-9501-47e07d28a3ac-operator-scripts\") pod \"nova-cell0-db-create-ncrjl\" (UID: \"d8921e83-a3ec-4c05-9501-47e07d28a3ac\") " pod="openstack/nova-cell0-db-create-ncrjl" Jan 27 14:06:16 crc kubenswrapper[4914]: I0127 14:06:16.980744 4914 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30de17e6-b0bd-4549-b794-052a3b6c9d84-operator-scripts\") pod \"nova-api-db-create-4zv8f\" (UID: \"30de17e6-b0bd-4549-b794-052a3b6c9d84\") " pod="openstack/nova-api-db-create-4zv8f" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.004114 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9dea-account-create-update-phmgm"] Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.015393 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w274p\" (UniqueName: \"kubernetes.io/projected/d8921e83-a3ec-4c05-9501-47e07d28a3ac-kube-api-access-w274p\") pod \"nova-cell0-db-create-ncrjl\" (UID: \"d8921e83-a3ec-4c05-9501-47e07d28a3ac\") " pod="openstack/nova-cell0-db-create-ncrjl" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.035164 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwbs6\" (UniqueName: \"kubernetes.io/projected/30de17e6-b0bd-4549-b794-052a3b6c9d84-kube-api-access-gwbs6\") pod \"nova-api-db-create-4zv8f\" (UID: \"30de17e6-b0bd-4549-b794-052a3b6c9d84\") " pod="openstack/nova-api-db-create-4zv8f" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.050740 4914 scope.go:117] "RemoveContainer" containerID="5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2" Jan 27 14:06:17 crc kubenswrapper[4914]: E0127 14:06:17.055093 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2\": container with ID starting with 5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2 not found: ID does not exist" containerID="5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.055551 4914 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2"} err="failed to get container status \"5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2\": rpc error: code = NotFound desc = could not find container \"5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2\": container with ID starting with 5b66b6799d1684b59e953ac78f0ddaf825214a6473c31d9c596d47ca49d727f2 not found: ID does not exist" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.055640 4914 scope.go:117] "RemoveContainer" containerID="81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.055328 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:06:17 crc kubenswrapper[4914]: E0127 14:06:17.062885 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3\": container with ID starting with 81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3 not found: ID does not exist" containerID="81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.062998 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3"} err="failed to get container status \"81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3\": rpc error: code = NotFound desc = could not find container \"81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3\": container with ID starting with 81d80032b50d40fab5a80b2fe3567cafea96440532d38fe3006ed4cc175ff2c3 not found: ID does not exist" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.080213 4914 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f745b4c-d50b-4e67-902d-ea60fedda7dc-operator-scripts\") pod \"nova-api-9dea-account-create-update-phmgm\" (UID: \"9f745b4c-d50b-4e67-902d-ea60fedda7dc\") " pod="openstack/nova-api-9dea-account-create-update-phmgm" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.080301 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87n7\" (UniqueName: \"kubernetes.io/projected/9f745b4c-d50b-4e67-902d-ea60fedda7dc-kube-api-access-r87n7\") pod \"nova-api-9dea-account-create-update-phmgm\" (UID: \"9f745b4c-d50b-4e67-902d-ea60fedda7dc\") " pod="openstack/nova-api-9dea-account-create-update-phmgm" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.083572 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f745b4c-d50b-4e67-902d-ea60fedda7dc-operator-scripts\") pod \"nova-api-9dea-account-create-update-phmgm\" (UID: \"9f745b4c-d50b-4e67-902d-ea60fedda7dc\") " pod="openstack/nova-api-9dea-account-create-update-phmgm" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.088114 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.093331 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-4zv8f" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.107858 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r87n7\" (UniqueName: \"kubernetes.io/projected/9f745b4c-d50b-4e67-902d-ea60fedda7dc-kube-api-access-r87n7\") pod \"nova-api-9dea-account-create-update-phmgm\" (UID: \"9f745b4c-d50b-4e67-902d-ea60fedda7dc\") " pod="openstack/nova-api-9dea-account-create-update-phmgm" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.109339 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.110821 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.117780 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.118141 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.128705 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-smbvn"] Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.131979 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-smbvn" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.148878 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.164259 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-smbvn"] Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.165017 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-ncrjl" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.192913 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-6020-account-create-update-p7x99"] Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.194410 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6020-account-create-update-p7x99" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.199323 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.206263 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6020-account-create-update-p7x99"] Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.294127 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnht9\" (UniqueName: \"kubernetes.io/projected/a9c87355-3782-4f33-8e73-14293d16499d-kube-api-access-cnht9\") pod \"nova-cell1-db-create-smbvn\" (UID: \"a9c87355-3782-4f33-8e73-14293d16499d\") " pod="openstack/nova-cell1-db-create-smbvn" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.294674 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9dea-account-create-update-phmgm" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.294754 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff464548-5e9c-4d46-a547-7d0cdd949883-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.295001 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c87355-3782-4f33-8e73-14293d16499d-operator-scripts\") pod \"nova-cell1-db-create-smbvn\" (UID: \"a9c87355-3782-4f33-8e73-14293d16499d\") " pod="openstack/nova-cell1-db-create-smbvn" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.295509 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff464548-5e9c-4d46-a547-7d0cdd949883-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.295586 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff464548-5e9c-4d46-a547-7d0cdd949883-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.295620 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.295673 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff464548-5e9c-4d46-a547-7d0cdd949883-logs\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.295715 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff464548-5e9c-4d46-a547-7d0cdd949883-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.295761 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfrm7\" (UniqueName: \"kubernetes.io/projected/ff464548-5e9c-4d46-a547-7d0cdd949883-kube-api-access-zfrm7\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.295796 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff464548-5e9c-4d46-a547-7d0cdd949883-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.351697 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.380009 4914 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-3d52-account-create-update-9v8gv"] Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.384007 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.386912 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.400135 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3d52-account-create-update-9v8gv"] Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.401512 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnht9\" (UniqueName: \"kubernetes.io/projected/a9c87355-3782-4f33-8e73-14293d16499d-kube-api-access-cnht9\") pod \"nova-cell1-db-create-smbvn\" (UID: \"a9c87355-3782-4f33-8e73-14293d16499d\") " pod="openstack/nova-cell1-db-create-smbvn" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.401685 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff464548-5e9c-4d46-a547-7d0cdd949883-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.402216 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c87355-3782-4f33-8e73-14293d16499d-operator-scripts\") pod \"nova-cell1-db-create-smbvn\" (UID: \"a9c87355-3782-4f33-8e73-14293d16499d\") " pod="openstack/nova-cell1-db-create-smbvn" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.402322 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ff464548-5e9c-4d46-a547-7d0cdd949883-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.402387 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff464548-5e9c-4d46-a547-7d0cdd949883-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.402422 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.402799 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvwwf\" (UniqueName: \"kubernetes.io/projected/8d6bf34b-de01-4687-8288-6c652539bbd2-kube-api-access-lvwwf\") pod \"nova-cell0-6020-account-create-update-p7x99\" (UID: \"8d6bf34b-de01-4687-8288-6c652539bbd2\") " pod="openstack/nova-cell0-6020-account-create-update-p7x99" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.405054 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff464548-5e9c-4d46-a547-7d0cdd949883-logs\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.405135 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/ff464548-5e9c-4d46-a547-7d0cdd949883-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.405169 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d6bf34b-de01-4687-8288-6c652539bbd2-operator-scripts\") pod \"nova-cell0-6020-account-create-update-p7x99\" (UID: \"8d6bf34b-de01-4687-8288-6c652539bbd2\") " pod="openstack/nova-cell0-6020-account-create-update-p7x99" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.405216 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfrm7\" (UniqueName: \"kubernetes.io/projected/ff464548-5e9c-4d46-a547-7d0cdd949883-kube-api-access-zfrm7\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.405257 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff464548-5e9c-4d46-a547-7d0cdd949883-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.405892 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c87355-3782-4f33-8e73-14293d16499d-operator-scripts\") pod \"nova-cell1-db-create-smbvn\" (UID: \"a9c87355-3782-4f33-8e73-14293d16499d\") " pod="openstack/nova-cell1-db-create-smbvn" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.406339 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ff464548-5e9c-4d46-a547-7d0cdd949883-logs\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.406684 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff464548-5e9c-4d46-a547-7d0cdd949883-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.407030 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.408535 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff464548-5e9c-4d46-a547-7d0cdd949883-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.409517 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff464548-5e9c-4d46-a547-7d0cdd949883-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.411638 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff464548-5e9c-4d46-a547-7d0cdd949883-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.422064 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff464548-5e9c-4d46-a547-7d0cdd949883-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.428625 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnht9\" (UniqueName: \"kubernetes.io/projected/a9c87355-3782-4f33-8e73-14293d16499d-kube-api-access-cnht9\") pod \"nova-cell1-db-create-smbvn\" (UID: \"a9c87355-3782-4f33-8e73-14293d16499d\") " pod="openstack/nova-cell1-db-create-smbvn" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.432427 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfrm7\" (UniqueName: \"kubernetes.io/projected/ff464548-5e9c-4d46-a547-7d0cdd949883-kube-api-access-zfrm7\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.493822 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ff464548-5e9c-4d46-a547-7d0cdd949883\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.506923 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc9fc14-27f3-42dd-b037-39b461aa19f1-operator-scripts\") pod \"nova-cell1-3d52-account-create-update-9v8gv\" (UID: 
\"8dc9fc14-27f3-42dd-b037-39b461aa19f1\") " pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.507242 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkzks\" (UniqueName: \"kubernetes.io/projected/8dc9fc14-27f3-42dd-b037-39b461aa19f1-kube-api-access-dkzks\") pod \"nova-cell1-3d52-account-create-update-9v8gv\" (UID: \"8dc9fc14-27f3-42dd-b037-39b461aa19f1\") " pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.507464 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvwwf\" (UniqueName: \"kubernetes.io/projected/8d6bf34b-de01-4687-8288-6c652539bbd2-kube-api-access-lvwwf\") pod \"nova-cell0-6020-account-create-update-p7x99\" (UID: \"8d6bf34b-de01-4687-8288-6c652539bbd2\") " pod="openstack/nova-cell0-6020-account-create-update-p7x99" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.507606 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d6bf34b-de01-4687-8288-6c652539bbd2-operator-scripts\") pod \"nova-cell0-6020-account-create-update-p7x99\" (UID: \"8d6bf34b-de01-4687-8288-6c652539bbd2\") " pod="openstack/nova-cell0-6020-account-create-update-p7x99" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.508591 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d6bf34b-de01-4687-8288-6c652539bbd2-operator-scripts\") pod \"nova-cell0-6020-account-create-update-p7x99\" (UID: \"8d6bf34b-de01-4687-8288-6c652539bbd2\") " pod="openstack/nova-cell0-6020-account-create-update-p7x99" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.527567 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.531816 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvwwf\" (UniqueName: \"kubernetes.io/projected/8d6bf34b-de01-4687-8288-6c652539bbd2-kube-api-access-lvwwf\") pod \"nova-cell0-6020-account-create-update-p7x99\" (UID: \"8d6bf34b-de01-4687-8288-6c652539bbd2\") " pod="openstack/nova-cell0-6020-account-create-update-p7x99" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.552004 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-smbvn" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.581857 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6020-account-create-update-p7x99" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.608802 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkzks\" (UniqueName: \"kubernetes.io/projected/8dc9fc14-27f3-42dd-b037-39b461aa19f1-kube-api-access-dkzks\") pod \"nova-cell1-3d52-account-create-update-9v8gv\" (UID: \"8dc9fc14-27f3-42dd-b037-39b461aa19f1\") " pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.608999 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc9fc14-27f3-42dd-b037-39b461aa19f1-operator-scripts\") pod \"nova-cell1-3d52-account-create-update-9v8gv\" (UID: \"8dc9fc14-27f3-42dd-b037-39b461aa19f1\") " pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.610512 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc9fc14-27f3-42dd-b037-39b461aa19f1-operator-scripts\") pod 
\"nova-cell1-3d52-account-create-update-9v8gv\" (UID: \"8dc9fc14-27f3-42dd-b037-39b461aa19f1\") " pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.628593 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-4zv8f"] Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.641730 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkzks\" (UniqueName: \"kubernetes.io/projected/8dc9fc14-27f3-42dd-b037-39b461aa19f1-kube-api-access-dkzks\") pod \"nova-cell1-3d52-account-create-update-9v8gv\" (UID: \"8dc9fc14-27f3-42dd-b037-39b461aa19f1\") " pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.736431 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.898521 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ncrjl"] Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.935363 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2330c3f1-da78-4cc1-a16a-856037a1f395","Type":"ContainerStarted","Data":"ce6ad4dfc84902ecde201ab5e4b9f9a07b757d0f283113f14f9047f1616e7954"} Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.938811 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-4zv8f" event={"ID":"30de17e6-b0bd-4549-b794-052a3b6c9d84","Type":"ContainerStarted","Data":"218fffcac2e531c6b78301cba8f58bf2fca9102a2a433016ae486b9c7f9c720b"} Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.955577 4914 generic.go:334] "Generic (PLEG): container finished" podID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerID="9e3abfcfbbdbd34795207b9cc8c3ffaf62281e5ff38250ee448c4dff8bf30921" exitCode=0 Jan 27 
14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.955613 4914 generic.go:334] "Generic (PLEG): container finished" podID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerID="6492d58038e86047e5dd3722ff0072e71f1fdad1a444011aebceab55538502d0" exitCode=2 Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.955626 4914 generic.go:334] "Generic (PLEG): container finished" podID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerID="8d8a73bc1f588632f4ea3372a217dc6d32b2a3802c7fba9764f7ad75dd5319a8" exitCode=0 Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.955647 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0ab6ceb-e872-4ca0-b532-80b329aa647f","Type":"ContainerDied","Data":"9e3abfcfbbdbd34795207b9cc8c3ffaf62281e5ff38250ee448c4dff8bf30921"} Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.955677 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0ab6ceb-e872-4ca0-b532-80b329aa647f","Type":"ContainerDied","Data":"6492d58038e86047e5dd3722ff0072e71f1fdad1a444011aebceab55538502d0"} Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.955690 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0ab6ceb-e872-4ca0-b532-80b329aa647f","Type":"ContainerDied","Data":"8d8a73bc1f588632f4ea3372a217dc6d32b2a3802c7fba9764f7ad75dd5319a8"} Jan 27 14:06:17 crc kubenswrapper[4914]: I0127 14:06:17.964404 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9dea-account-create-update-phmgm"] Jan 27 14:06:18 crc kubenswrapper[4914]: W0127 14:06:18.011353 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f745b4c_d50b_4e67_902d_ea60fedda7dc.slice/crio-78b87468bc42d931f2ca512bb3e58a53592cf452d7ca7779429b3cc3501a75ee WatchSource:0}: Error finding container 78b87468bc42d931f2ca512bb3e58a53592cf452d7ca7779429b3cc3501a75ee: Status 404 
returned error can't find the container with id 78b87468bc42d931f2ca512bb3e58a53592cf452d7ca7779429b3cc3501a75ee Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.156932 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.304946 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd59938-4cf8-4632-8b1c-237cf981fd5f" path="/var/lib/kubelet/pods/1dd59938-4cf8-4632-8b1c-237cf981fd5f/volumes" Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.305599 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4" path="/var/lib/kubelet/pods/ece67619-8ef7-4c3f-ba5a-36fcc1f05fe4/volumes" Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.306780 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-smbvn"] Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.349247 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6020-account-create-update-p7x99"] Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.433702 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.497100 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3d52-account-create-update-9v8gv"] Jan 27 14:06:18 crc kubenswrapper[4914]: W0127 14:06:18.506572 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dc9fc14_27f3_42dd_b037_39b461aa19f1.slice/crio-35d7dfe613dc37bf216110afed3e51f31749ff3ede83e8cfd640666ff6c9a09c WatchSource:0}: Error finding container 35d7dfe613dc37bf216110afed3e51f31749ff3ede83e8cfd640666ff6c9a09c: Status 404 returned error can't find the container with id 
35d7dfe613dc37bf216110afed3e51f31749ff3ede83e8cfd640666ff6c9a09c Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.970104 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff464548-5e9c-4d46-a547-7d0cdd949883","Type":"ContainerStarted","Data":"cad688fd583e2dc5f915096d2bf66d1fa860092e9baaa8477197cd256f995392"} Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.972219 4914 generic.go:334] "Generic (PLEG): container finished" podID="9f745b4c-d50b-4e67-902d-ea60fedda7dc" containerID="c4a240aee59a7c7c3e19351e4a56d3db9f319196b8c7e37c539352c8ced4ebb9" exitCode=0 Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.972276 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9dea-account-create-update-phmgm" event={"ID":"9f745b4c-d50b-4e67-902d-ea60fedda7dc","Type":"ContainerDied","Data":"c4a240aee59a7c7c3e19351e4a56d3db9f319196b8c7e37c539352c8ced4ebb9"} Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.972293 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9dea-account-create-update-phmgm" event={"ID":"9f745b4c-d50b-4e67-902d-ea60fedda7dc","Type":"ContainerStarted","Data":"78b87468bc42d931f2ca512bb3e58a53592cf452d7ca7779429b3cc3501a75ee"} Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.974226 4914 generic.go:334] "Generic (PLEG): container finished" podID="d8921e83-a3ec-4c05-9501-47e07d28a3ac" containerID="506390ca40c1346c82018fa458facd197e0d38af2a54f4ecc82d3415332459b1" exitCode=0 Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.974303 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ncrjl" event={"ID":"d8921e83-a3ec-4c05-9501-47e07d28a3ac","Type":"ContainerDied","Data":"506390ca40c1346c82018fa458facd197e0d38af2a54f4ecc82d3415332459b1"} Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.974439 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-ncrjl" event={"ID":"d8921e83-a3ec-4c05-9501-47e07d28a3ac","Type":"ContainerStarted","Data":"75bbf8659964ec91e82fcc68e11a29de281fef6951b6559d707d3b12902ed5a2"} Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.976301 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2330c3f1-da78-4cc1-a16a-856037a1f395","Type":"ContainerStarted","Data":"9e440d6a203f3e3a361bab2b9ce631b34ab0aa9cd5da8e43a185f74e13dc2ca4"} Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.979383 4914 generic.go:334] "Generic (PLEG): container finished" podID="8d6bf34b-de01-4687-8288-6c652539bbd2" containerID="bb3a74d6b4bb7b0425c6522a662de5b37d55c95b0ee9de72aa1e4becfab41a82" exitCode=0 Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.979467 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6020-account-create-update-p7x99" event={"ID":"8d6bf34b-de01-4687-8288-6c652539bbd2","Type":"ContainerDied","Data":"bb3a74d6b4bb7b0425c6522a662de5b37d55c95b0ee9de72aa1e4becfab41a82"} Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.979509 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6020-account-create-update-p7x99" event={"ID":"8d6bf34b-de01-4687-8288-6c652539bbd2","Type":"ContainerStarted","Data":"cd546c322bfcf4f764e1ecad6d03e314342d4bec3c1bf9f1b2fad7ccf997017a"} Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.996149 4914 generic.go:334] "Generic (PLEG): container finished" podID="30de17e6-b0bd-4549-b794-052a3b6c9d84" containerID="b284019a99a6f8755cd21a5c4044ffecd831e19c7b7182c85436ca498b081813" exitCode=0 Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.996207 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-4zv8f" event={"ID":"30de17e6-b0bd-4549-b794-052a3b6c9d84","Type":"ContainerDied","Data":"b284019a99a6f8755cd21a5c4044ffecd831e19c7b7182c85436ca498b081813"} Jan 27 
14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.998152 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" event={"ID":"8dc9fc14-27f3-42dd-b037-39b461aa19f1","Type":"ContainerStarted","Data":"6ec8deadd6a92a40eb61f5e84ff91f0b61711bd06fd87c23520319b51731217b"} Jan 27 14:06:18 crc kubenswrapper[4914]: I0127 14:06:18.998185 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" event={"ID":"8dc9fc14-27f3-42dd-b037-39b461aa19f1","Type":"ContainerStarted","Data":"35d7dfe613dc37bf216110afed3e51f31749ff3ede83e8cfd640666ff6c9a09c"} Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.008393 4914 generic.go:334] "Generic (PLEG): container finished" podID="a9c87355-3782-4f33-8e73-14293d16499d" containerID="53b4f8943c226a7eb989900d3f6a4344f58eb74ddc02eb261fec8885ba42dc09" exitCode=0 Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.008454 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-smbvn" event={"ID":"a9c87355-3782-4f33-8e73-14293d16499d","Type":"ContainerDied","Data":"53b4f8943c226a7eb989900d3f6a4344f58eb74ddc02eb261fec8885ba42dc09"} Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.008500 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-smbvn" event={"ID":"a9c87355-3782-4f33-8e73-14293d16499d","Type":"ContainerStarted","Data":"a6bd222ad808ca56623fe44de397463d7c2ed7243a2bc9b790db98be73fff3de"} Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.073730 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.083475 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" podStartSLOduration=2.083461175 podStartE2EDuration="2.083461175s" 
podCreationTimestamp="2026-01-27 14:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:19.073471251 +0000 UTC m=+1337.385821336" watchObservedRunningTime="2026-01-27 14:06:19.083461175 +0000 UTC m=+1337.395811260" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.543300 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.592120 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-scripts\") pod \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.592191 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6ceb-e872-4ca0-b532-80b329aa647f-log-httpd\") pod \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.592354 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tb4m\" (UniqueName: \"kubernetes.io/projected/a0ab6ceb-e872-4ca0-b532-80b329aa647f-kube-api-access-2tb4m\") pod \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.592389 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-config-data\") pod \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.592468 4914 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6ceb-e872-4ca0-b532-80b329aa647f-run-httpd\") pod \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.592520 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-sg-core-conf-yaml\") pod \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.592571 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-combined-ca-bundle\") pod \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\" (UID: \"a0ab6ceb-e872-4ca0-b532-80b329aa647f\") " Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.594248 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0ab6ceb-e872-4ca0-b532-80b329aa647f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a0ab6ceb-e872-4ca0-b532-80b329aa647f" (UID: "a0ab6ceb-e872-4ca0-b532-80b329aa647f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.594535 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0ab6ceb-e872-4ca0-b532-80b329aa647f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a0ab6ceb-e872-4ca0-b532-80b329aa647f" (UID: "a0ab6ceb-e872-4ca0-b532-80b329aa647f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.610073 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0ab6ceb-e872-4ca0-b532-80b329aa647f-kube-api-access-2tb4m" (OuterVolumeSpecName: "kube-api-access-2tb4m") pod "a0ab6ceb-e872-4ca0-b532-80b329aa647f" (UID: "a0ab6ceb-e872-4ca0-b532-80b329aa647f"). InnerVolumeSpecName "kube-api-access-2tb4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.626368 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-scripts" (OuterVolumeSpecName: "scripts") pod "a0ab6ceb-e872-4ca0-b532-80b329aa647f" (UID: "a0ab6ceb-e872-4ca0-b532-80b329aa647f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.672173 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a0ab6ceb-e872-4ca0-b532-80b329aa647f" (UID: "a0ab6ceb-e872-4ca0-b532-80b329aa647f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.694765 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tb4m\" (UniqueName: \"kubernetes.io/projected/a0ab6ceb-e872-4ca0-b532-80b329aa647f-kube-api-access-2tb4m\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.694792 4914 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6ceb-e872-4ca0-b532-80b329aa647f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.694804 4914 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.694816 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.694824 4914 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6ceb-e872-4ca0-b532-80b329aa647f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.699987 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0ab6ceb-e872-4ca0-b532-80b329aa647f" (UID: "a0ab6ceb-e872-4ca0-b532-80b329aa647f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.738516 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-config-data" (OuterVolumeSpecName: "config-data") pod "a0ab6ceb-e872-4ca0-b532-80b329aa647f" (UID: "a0ab6ceb-e872-4ca0-b532-80b329aa647f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.798135 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:19 crc kubenswrapper[4914]: I0127 14:06:19.798170 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ab6ceb-e872-4ca0-b532-80b329aa647f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.018801 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2330c3f1-da78-4cc1-a16a-856037a1f395","Type":"ContainerStarted","Data":"fcaad43cce9e660baf593baa77dd28d983fa51563c76eab0b3f46ec1a04554ef"} Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.020666 4914 generic.go:334] "Generic (PLEG): container finished" podID="8dc9fc14-27f3-42dd-b037-39b461aa19f1" containerID="6ec8deadd6a92a40eb61f5e84ff91f0b61711bd06fd87c23520319b51731217b" exitCode=0 Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.020720 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" event={"ID":"8dc9fc14-27f3-42dd-b037-39b461aa19f1","Type":"ContainerDied","Data":"6ec8deadd6a92a40eb61f5e84ff91f0b61711bd06fd87c23520319b51731217b"} Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.022879 4914 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff464548-5e9c-4d46-a547-7d0cdd949883","Type":"ContainerStarted","Data":"49842300c81f7f1f01a576779e67215a86e15b03ac26e86f11cbb6cf85e75816"} Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.022925 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff464548-5e9c-4d46-a547-7d0cdd949883","Type":"ContainerStarted","Data":"249be1062869525a65da42205c2568ac56187f23b33e6d007e1de8c96c8402c3"} Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.027017 4914 generic.go:334] "Generic (PLEG): container finished" podID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerID="f509a2ccb63df203c72cfc2b297f74dd625b3fb6b2d58c5433422f0f6ebbe81e" exitCode=0 Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.027144 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0ab6ceb-e872-4ca0-b532-80b329aa647f","Type":"ContainerDied","Data":"f509a2ccb63df203c72cfc2b297f74dd625b3fb6b2d58c5433422f0f6ebbe81e"} Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.027183 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0ab6ceb-e872-4ca0-b532-80b329aa647f","Type":"ContainerDied","Data":"b78da211e6bd6f8da56ae7260c177c76c1c783c14ba4009cc2b1b2b551c765a6"} Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.027205 4914 scope.go:117] "RemoveContainer" containerID="9e3abfcfbbdbd34795207b9cc8c3ffaf62281e5ff38250ee448c4dff8bf30921" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.027133 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.053251 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.053225822 podStartE2EDuration="4.053225822s" podCreationTimestamp="2026-01-27 14:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:20.045226753 +0000 UTC m=+1338.357576868" watchObservedRunningTime="2026-01-27 14:06:20.053225822 +0000 UTC m=+1338.365575927" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.087607 4914 scope.go:117] "RemoveContainer" containerID="6492d58038e86047e5dd3722ff0072e71f1fdad1a444011aebceab55538502d0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.114754 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.114718207 podStartE2EDuration="4.114718207s" podCreationTimestamp="2026-01-27 14:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:20.099599223 +0000 UTC m=+1338.411949308" watchObservedRunningTime="2026-01-27 14:06:20.114718207 +0000 UTC m=+1338.427068292" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.139087 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.171211 4914 scope.go:117] "RemoveContainer" containerID="8d8a73bc1f588632f4ea3372a217dc6d32b2a3802c7fba9764f7ad75dd5319a8" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.179665 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.194167 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 
27 14:06:20 crc kubenswrapper[4914]: E0127 14:06:20.194642 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="ceilometer-central-agent" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.194656 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="ceilometer-central-agent" Jan 27 14:06:20 crc kubenswrapper[4914]: E0127 14:06:20.194695 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="ceilometer-notification-agent" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.194701 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="ceilometer-notification-agent" Jan 27 14:06:20 crc kubenswrapper[4914]: E0127 14:06:20.194730 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="sg-core" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.194736 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="sg-core" Jan 27 14:06:20 crc kubenswrapper[4914]: E0127 14:06:20.194771 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="proxy-httpd" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.194777 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="proxy-httpd" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.195160 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="sg-core" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.195178 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="proxy-httpd" 
Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.195200 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="ceilometer-notification-agent" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.195219 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" containerName="ceilometer-central-agent" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.200786 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.204557 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.204767 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.210872 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.214216 4914 scope.go:117] "RemoveContainer" containerID="f509a2ccb63df203c72cfc2b297f74dd625b3fb6b2d58c5433422f0f6ebbe81e" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.266427 4914 scope.go:117] "RemoveContainer" containerID="9e3abfcfbbdbd34795207b9cc8c3ffaf62281e5ff38250ee448c4dff8bf30921" Jan 27 14:06:20 crc kubenswrapper[4914]: E0127 14:06:20.267064 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e3abfcfbbdbd34795207b9cc8c3ffaf62281e5ff38250ee448c4dff8bf30921\": container with ID starting with 9e3abfcfbbdbd34795207b9cc8c3ffaf62281e5ff38250ee448c4dff8bf30921 not found: ID does not exist" containerID="9e3abfcfbbdbd34795207b9cc8c3ffaf62281e5ff38250ee448c4dff8bf30921" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.267092 4914 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e3abfcfbbdbd34795207b9cc8c3ffaf62281e5ff38250ee448c4dff8bf30921"} err="failed to get container status \"9e3abfcfbbdbd34795207b9cc8c3ffaf62281e5ff38250ee448c4dff8bf30921\": rpc error: code = NotFound desc = could not find container \"9e3abfcfbbdbd34795207b9cc8c3ffaf62281e5ff38250ee448c4dff8bf30921\": container with ID starting with 9e3abfcfbbdbd34795207b9cc8c3ffaf62281e5ff38250ee448c4dff8bf30921 not found: ID does not exist" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.267113 4914 scope.go:117] "RemoveContainer" containerID="6492d58038e86047e5dd3722ff0072e71f1fdad1a444011aebceab55538502d0" Jan 27 14:06:20 crc kubenswrapper[4914]: E0127 14:06:20.267376 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6492d58038e86047e5dd3722ff0072e71f1fdad1a444011aebceab55538502d0\": container with ID starting with 6492d58038e86047e5dd3722ff0072e71f1fdad1a444011aebceab55538502d0 not found: ID does not exist" containerID="6492d58038e86047e5dd3722ff0072e71f1fdad1a444011aebceab55538502d0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.267400 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6492d58038e86047e5dd3722ff0072e71f1fdad1a444011aebceab55538502d0"} err="failed to get container status \"6492d58038e86047e5dd3722ff0072e71f1fdad1a444011aebceab55538502d0\": rpc error: code = NotFound desc = could not find container \"6492d58038e86047e5dd3722ff0072e71f1fdad1a444011aebceab55538502d0\": container with ID starting with 6492d58038e86047e5dd3722ff0072e71f1fdad1a444011aebceab55538502d0 not found: ID does not exist" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.267432 4914 scope.go:117] "RemoveContainer" containerID="8d8a73bc1f588632f4ea3372a217dc6d32b2a3802c7fba9764f7ad75dd5319a8" Jan 27 14:06:20 crc kubenswrapper[4914]: E0127 14:06:20.267711 4914 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d8a73bc1f588632f4ea3372a217dc6d32b2a3802c7fba9764f7ad75dd5319a8\": container with ID starting with 8d8a73bc1f588632f4ea3372a217dc6d32b2a3802c7fba9764f7ad75dd5319a8 not found: ID does not exist" containerID="8d8a73bc1f588632f4ea3372a217dc6d32b2a3802c7fba9764f7ad75dd5319a8" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.267737 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8a73bc1f588632f4ea3372a217dc6d32b2a3802c7fba9764f7ad75dd5319a8"} err="failed to get container status \"8d8a73bc1f588632f4ea3372a217dc6d32b2a3802c7fba9764f7ad75dd5319a8\": rpc error: code = NotFound desc = could not find container \"8d8a73bc1f588632f4ea3372a217dc6d32b2a3802c7fba9764f7ad75dd5319a8\": container with ID starting with 8d8a73bc1f588632f4ea3372a217dc6d32b2a3802c7fba9764f7ad75dd5319a8 not found: ID does not exist" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.267753 4914 scope.go:117] "RemoveContainer" containerID="f509a2ccb63df203c72cfc2b297f74dd625b3fb6b2d58c5433422f0f6ebbe81e" Jan 27 14:06:20 crc kubenswrapper[4914]: E0127 14:06:20.268107 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f509a2ccb63df203c72cfc2b297f74dd625b3fb6b2d58c5433422f0f6ebbe81e\": container with ID starting with f509a2ccb63df203c72cfc2b297f74dd625b3fb6b2d58c5433422f0f6ebbe81e not found: ID does not exist" containerID="f509a2ccb63df203c72cfc2b297f74dd625b3fb6b2d58c5433422f0f6ebbe81e" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.268126 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f509a2ccb63df203c72cfc2b297f74dd625b3fb6b2d58c5433422f0f6ebbe81e"} err="failed to get container status \"f509a2ccb63df203c72cfc2b297f74dd625b3fb6b2d58c5433422f0f6ebbe81e\": rpc error: code = NotFound desc = could 
not find container \"f509a2ccb63df203c72cfc2b297f74dd625b3fb6b2d58c5433422f0f6ebbe81e\": container with ID starting with f509a2ccb63df203c72cfc2b297f74dd625b3fb6b2d58c5433422f0f6ebbe81e not found: ID does not exist" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.333469 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0ab6ceb-e872-4ca0-b532-80b329aa647f" path="/var/lib/kubelet/pods/a0ab6ceb-e872-4ca0-b532-80b329aa647f/volumes" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.417549 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.417980 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcd5b\" (UniqueName: \"kubernetes.io/projected/aa3cfa4b-174e-4958-8bd6-560f1f990c68-kube-api-access-zcd5b\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.418183 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-scripts\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.418251 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-config-data\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc 
kubenswrapper[4914]: I0127 14:06:20.418328 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3cfa4b-174e-4958-8bd6-560f1f990c68-log-httpd\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.418420 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.418463 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3cfa4b-174e-4958-8bd6-560f1f990c68-run-httpd\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.519774 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3cfa4b-174e-4958-8bd6-560f1f990c68-log-httpd\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.519885 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.519923 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/aa3cfa4b-174e-4958-8bd6-560f1f990c68-run-httpd\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.519943 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.519976 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcd5b\" (UniqueName: \"kubernetes.io/projected/aa3cfa4b-174e-4958-8bd6-560f1f990c68-kube-api-access-zcd5b\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.520054 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-scripts\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.520090 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-config-data\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.521120 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3cfa4b-174e-4958-8bd6-560f1f990c68-log-httpd\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.529304 
4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.529341 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-config-data\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.529802 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.530295 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3cfa4b-174e-4958-8bd6-560f1f990c68-run-httpd\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.531486 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-scripts\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.546287 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcd5b\" (UniqueName: \"kubernetes.io/projected/aa3cfa4b-174e-4958-8bd6-560f1f990c68-kube-api-access-zcd5b\") pod \"ceilometer-0\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " 
pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.615331 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ncrjl" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.724665 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8921e83-a3ec-4c05-9501-47e07d28a3ac-operator-scripts\") pod \"d8921e83-a3ec-4c05-9501-47e07d28a3ac\" (UID: \"d8921e83-a3ec-4c05-9501-47e07d28a3ac\") " Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.725039 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w274p\" (UniqueName: \"kubernetes.io/projected/d8921e83-a3ec-4c05-9501-47e07d28a3ac-kube-api-access-w274p\") pod \"d8921e83-a3ec-4c05-9501-47e07d28a3ac\" (UID: \"d8921e83-a3ec-4c05-9501-47e07d28a3ac\") " Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.726667 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8921e83-a3ec-4c05-9501-47e07d28a3ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8921e83-a3ec-4c05-9501-47e07d28a3ac" (UID: "d8921e83-a3ec-4c05-9501-47e07d28a3ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.730963 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8921e83-a3ec-4c05-9501-47e07d28a3ac-kube-api-access-w274p" (OuterVolumeSpecName: "kube-api-access-w274p") pod "d8921e83-a3ec-4c05-9501-47e07d28a3ac" (UID: "d8921e83-a3ec-4c05-9501-47e07d28a3ac"). InnerVolumeSpecName "kube-api-access-w274p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.803917 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9dea-account-create-update-phmgm" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.808264 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-smbvn" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.814721 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-4zv8f" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.823908 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.824149 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6020-account-create-update-p7x99" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.827516 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8921e83-a3ec-4c05-9501-47e07d28a3ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.827544 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w274p\" (UniqueName: \"kubernetes.io/projected/d8921e83-a3ec-4c05-9501-47e07d28a3ac-kube-api-access-w274p\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.940446 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvwwf\" (UniqueName: \"kubernetes.io/projected/8d6bf34b-de01-4687-8288-6c652539bbd2-kube-api-access-lvwwf\") pod \"8d6bf34b-de01-4687-8288-6c652539bbd2\" (UID: \"8d6bf34b-de01-4687-8288-6c652539bbd2\") " Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.940537 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwbs6\" (UniqueName: 
\"kubernetes.io/projected/30de17e6-b0bd-4549-b794-052a3b6c9d84-kube-api-access-gwbs6\") pod \"30de17e6-b0bd-4549-b794-052a3b6c9d84\" (UID: \"30de17e6-b0bd-4549-b794-052a3b6c9d84\") " Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.940656 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnht9\" (UniqueName: \"kubernetes.io/projected/a9c87355-3782-4f33-8e73-14293d16499d-kube-api-access-cnht9\") pod \"a9c87355-3782-4f33-8e73-14293d16499d\" (UID: \"a9c87355-3782-4f33-8e73-14293d16499d\") " Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.940709 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r87n7\" (UniqueName: \"kubernetes.io/projected/9f745b4c-d50b-4e67-902d-ea60fedda7dc-kube-api-access-r87n7\") pod \"9f745b4c-d50b-4e67-902d-ea60fedda7dc\" (UID: \"9f745b4c-d50b-4e67-902d-ea60fedda7dc\") " Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.940738 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30de17e6-b0bd-4549-b794-052a3b6c9d84-operator-scripts\") pod \"30de17e6-b0bd-4549-b794-052a3b6c9d84\" (UID: \"30de17e6-b0bd-4549-b794-052a3b6c9d84\") " Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.940780 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f745b4c-d50b-4e67-902d-ea60fedda7dc-operator-scripts\") pod \"9f745b4c-d50b-4e67-902d-ea60fedda7dc\" (UID: \"9f745b4c-d50b-4e67-902d-ea60fedda7dc\") " Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.940877 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c87355-3782-4f33-8e73-14293d16499d-operator-scripts\") pod \"a9c87355-3782-4f33-8e73-14293d16499d\" (UID: \"a9c87355-3782-4f33-8e73-14293d16499d\") " Jan 27 
14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.940949 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d6bf34b-de01-4687-8288-6c652539bbd2-operator-scripts\") pod \"8d6bf34b-de01-4687-8288-6c652539bbd2\" (UID: \"8d6bf34b-de01-4687-8288-6c652539bbd2\") " Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.941566 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30de17e6-b0bd-4549-b794-052a3b6c9d84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30de17e6-b0bd-4549-b794-052a3b6c9d84" (UID: "30de17e6-b0bd-4549-b794-052a3b6c9d84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.941796 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d6bf34b-de01-4687-8288-6c652539bbd2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d6bf34b-de01-4687-8288-6c652539bbd2" (UID: "8d6bf34b-de01-4687-8288-6c652539bbd2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.942219 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f745b4c-d50b-4e67-902d-ea60fedda7dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f745b4c-d50b-4e67-902d-ea60fedda7dc" (UID: "9f745b4c-d50b-4e67-902d-ea60fedda7dc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.942328 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c87355-3782-4f33-8e73-14293d16499d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9c87355-3782-4f33-8e73-14293d16499d" (UID: "a9c87355-3782-4f33-8e73-14293d16499d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.944363 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30de17e6-b0bd-4549-b794-052a3b6c9d84-kube-api-access-gwbs6" (OuterVolumeSpecName: "kube-api-access-gwbs6") pod "30de17e6-b0bd-4549-b794-052a3b6c9d84" (UID: "30de17e6-b0bd-4549-b794-052a3b6c9d84"). InnerVolumeSpecName "kube-api-access-gwbs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.945329 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c87355-3782-4f33-8e73-14293d16499d-kube-api-access-cnht9" (OuterVolumeSpecName: "kube-api-access-cnht9") pod "a9c87355-3782-4f33-8e73-14293d16499d" (UID: "a9c87355-3782-4f33-8e73-14293d16499d"). InnerVolumeSpecName "kube-api-access-cnht9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.946943 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f745b4c-d50b-4e67-902d-ea60fedda7dc-kube-api-access-r87n7" (OuterVolumeSpecName: "kube-api-access-r87n7") pod "9f745b4c-d50b-4e67-902d-ea60fedda7dc" (UID: "9f745b4c-d50b-4e67-902d-ea60fedda7dc"). InnerVolumeSpecName "kube-api-access-r87n7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:20 crc kubenswrapper[4914]: I0127 14:06:20.948091 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6bf34b-de01-4687-8288-6c652539bbd2-kube-api-access-lvwwf" (OuterVolumeSpecName: "kube-api-access-lvwwf") pod "8d6bf34b-de01-4687-8288-6c652539bbd2" (UID: "8d6bf34b-de01-4687-8288-6c652539bbd2"). InnerVolumeSpecName "kube-api-access-lvwwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.037255 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9dea-account-create-update-phmgm" event={"ID":"9f745b4c-d50b-4e67-902d-ea60fedda7dc","Type":"ContainerDied","Data":"78b87468bc42d931f2ca512bb3e58a53592cf452d7ca7779429b3cc3501a75ee"} Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.037584 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78b87468bc42d931f2ca512bb3e58a53592cf452d7ca7779429b3cc3501a75ee" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.037636 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9dea-account-create-update-phmgm" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.040914 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ncrjl" event={"ID":"d8921e83-a3ec-4c05-9501-47e07d28a3ac","Type":"ContainerDied","Data":"75bbf8659964ec91e82fcc68e11a29de281fef6951b6559d707d3b12902ed5a2"} Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.040955 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75bbf8659964ec91e82fcc68e11a29de281fef6951b6559d707d3b12902ed5a2" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.041016 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-ncrjl" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.044409 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnht9\" (UniqueName: \"kubernetes.io/projected/a9c87355-3782-4f33-8e73-14293d16499d-kube-api-access-cnht9\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.044449 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r87n7\" (UniqueName: \"kubernetes.io/projected/9f745b4c-d50b-4e67-902d-ea60fedda7dc-kube-api-access-r87n7\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.044464 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30de17e6-b0bd-4549-b794-052a3b6c9d84-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.044475 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f745b4c-d50b-4e67-902d-ea60fedda7dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.044487 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c87355-3782-4f33-8e73-14293d16499d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.044500 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d6bf34b-de01-4687-8288-6c652539bbd2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.044512 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvwwf\" (UniqueName: \"kubernetes.io/projected/8d6bf34b-de01-4687-8288-6c652539bbd2-kube-api-access-lvwwf\") on node \"crc\" DevicePath 
\"\"" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.044527 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwbs6\" (UniqueName: \"kubernetes.io/projected/30de17e6-b0bd-4549-b794-052a3b6c9d84-kube-api-access-gwbs6\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.048873 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6020-account-create-update-p7x99" event={"ID":"8d6bf34b-de01-4687-8288-6c652539bbd2","Type":"ContainerDied","Data":"cd546c322bfcf4f764e1ecad6d03e314342d4bec3c1bf9f1b2fad7ccf997017a"} Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.048916 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd546c322bfcf4f764e1ecad6d03e314342d4bec3c1bf9f1b2fad7ccf997017a" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.048984 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6020-account-create-update-p7x99" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.054514 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-4zv8f" event={"ID":"30de17e6-b0bd-4549-b794-052a3b6c9d84","Type":"ContainerDied","Data":"218fffcac2e531c6b78301cba8f58bf2fca9102a2a433016ae486b9c7f9c720b"} Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.054556 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="218fffcac2e531c6b78301cba8f58bf2fca9102a2a433016ae486b9c7f9c720b" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.054652 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-4zv8f" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.057796 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-smbvn" event={"ID":"a9c87355-3782-4f33-8e73-14293d16499d","Type":"ContainerDied","Data":"a6bd222ad808ca56623fe44de397463d7c2ed7243a2bc9b790db98be73fff3de"} Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.057861 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6bd222ad808ca56623fe44de397463d7c2ed7243a2bc9b790db98be73fff3de" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.057916 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-smbvn" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.297124 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.353083 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.384501 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.553944 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc9fc14-27f3-42dd-b037-39b461aa19f1-operator-scripts\") pod \"8dc9fc14-27f3-42dd-b037-39b461aa19f1\" (UID: \"8dc9fc14-27f3-42dd-b037-39b461aa19f1\") " Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.554132 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkzks\" (UniqueName: \"kubernetes.io/projected/8dc9fc14-27f3-42dd-b037-39b461aa19f1-kube-api-access-dkzks\") pod \"8dc9fc14-27f3-42dd-b037-39b461aa19f1\" (UID: \"8dc9fc14-27f3-42dd-b037-39b461aa19f1\") " Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.555723 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dc9fc14-27f3-42dd-b037-39b461aa19f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8dc9fc14-27f3-42dd-b037-39b461aa19f1" (UID: "8dc9fc14-27f3-42dd-b037-39b461aa19f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.560405 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc9fc14-27f3-42dd-b037-39b461aa19f1-kube-api-access-dkzks" (OuterVolumeSpecName: "kube-api-access-dkzks") pod "8dc9fc14-27f3-42dd-b037-39b461aa19f1" (UID: "8dc9fc14-27f3-42dd-b037-39b461aa19f1"). InnerVolumeSpecName "kube-api-access-dkzks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.655996 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkzks\" (UniqueName: \"kubernetes.io/projected/8dc9fc14-27f3-42dd-b037-39b461aa19f1-kube-api-access-dkzks\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:21 crc kubenswrapper[4914]: I0127 14:06:21.656043 4914 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc9fc14-27f3-42dd-b037-39b461aa19f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.069491 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3cfa4b-174e-4958-8bd6-560f1f990c68","Type":"ContainerStarted","Data":"d4dd9615570f7ee70142aa7d7ddc40ddd42cffab8c1bd372495b682cfd2d0241"} Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.071522 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" event={"ID":"8dc9fc14-27f3-42dd-b037-39b461aa19f1","Type":"ContainerDied","Data":"35d7dfe613dc37bf216110afed3e51f31749ff3ede83e8cfd640666ff6c9a09c"} Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.071564 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35d7dfe613dc37bf216110afed3e51f31749ff3ede83e8cfd640666ff6c9a09c" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.071642 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3d52-account-create-update-9v8gv" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.439504 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xbz4b"] Jan 27 14:06:22 crc kubenswrapper[4914]: E0127 14:06:22.440060 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30de17e6-b0bd-4549-b794-052a3b6c9d84" containerName="mariadb-database-create" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.440080 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="30de17e6-b0bd-4549-b794-052a3b6c9d84" containerName="mariadb-database-create" Jan 27 14:06:22 crc kubenswrapper[4914]: E0127 14:06:22.440095 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc9fc14-27f3-42dd-b037-39b461aa19f1" containerName="mariadb-account-create-update" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.440106 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc9fc14-27f3-42dd-b037-39b461aa19f1" containerName="mariadb-account-create-update" Jan 27 14:06:22 crc kubenswrapper[4914]: E0127 14:06:22.440119 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8921e83-a3ec-4c05-9501-47e07d28a3ac" containerName="mariadb-database-create" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.440124 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8921e83-a3ec-4c05-9501-47e07d28a3ac" containerName="mariadb-database-create" Jan 27 14:06:22 crc kubenswrapper[4914]: E0127 14:06:22.440157 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c87355-3782-4f33-8e73-14293d16499d" containerName="mariadb-database-create" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.440164 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c87355-3782-4f33-8e73-14293d16499d" containerName="mariadb-database-create" Jan 27 14:06:22 crc kubenswrapper[4914]: E0127 14:06:22.440172 4914 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6bf34b-de01-4687-8288-6c652539bbd2" containerName="mariadb-account-create-update" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.440178 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6bf34b-de01-4687-8288-6c652539bbd2" containerName="mariadb-account-create-update" Jan 27 14:06:22 crc kubenswrapper[4914]: E0127 14:06:22.440192 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f745b4c-d50b-4e67-902d-ea60fedda7dc" containerName="mariadb-account-create-update" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.440198 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f745b4c-d50b-4e67-902d-ea60fedda7dc" containerName="mariadb-account-create-update" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.440380 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="30de17e6-b0bd-4549-b794-052a3b6c9d84" containerName="mariadb-database-create" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.440392 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc9fc14-27f3-42dd-b037-39b461aa19f1" containerName="mariadb-account-create-update" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.440405 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c87355-3782-4f33-8e73-14293d16499d" containerName="mariadb-database-create" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.440413 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f745b4c-d50b-4e67-902d-ea60fedda7dc" containerName="mariadb-account-create-update" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.440424 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8921e83-a3ec-4c05-9501-47e07d28a3ac" containerName="mariadb-database-create" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.440435 4914 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8d6bf34b-de01-4687-8288-6c652539bbd2" containerName="mariadb-account-create-update" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.441155 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.444751 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.445590 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qfkwf" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.446064 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.448848 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xbz4b"] Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.499341 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-scripts\") pod \"nova-cell0-conductor-db-sync-xbz4b\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.499410 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xbz4b\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.499448 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65qxk\" 
(UniqueName: \"kubernetes.io/projected/58d81784-ad81-47ce-befb-d2ec09617b1c-kube-api-access-65qxk\") pod \"nova-cell0-conductor-db-sync-xbz4b\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.499502 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-config-data\") pod \"nova-cell0-conductor-db-sync-xbz4b\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.541508 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-86dc4fcb7d-k54hq"] Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.543703 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.566679 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7898f695d7-6lw8w"] Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.568323 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.583897 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86dc4fcb7d-k54hq"] Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.589790 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7898f695d7-6lw8w"] Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.602685 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-scripts\") pod \"nova-cell0-conductor-db-sync-xbz4b\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.602749 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xbz4b\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.602796 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65qxk\" (UniqueName: \"kubernetes.io/projected/58d81784-ad81-47ce-befb-d2ec09617b1c-kube-api-access-65qxk\") pod \"nova-cell0-conductor-db-sync-xbz4b\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.602886 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-config-data\") pod \"nova-cell0-conductor-db-sync-xbz4b\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " 
pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.609824 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-config-data\") pod \"nova-cell0-conductor-db-sync-xbz4b\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.623015 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-scripts\") pod \"nova-cell0-conductor-db-sync-xbz4b\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.632515 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xbz4b\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.655977 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65qxk\" (UniqueName: \"kubernetes.io/projected/58d81784-ad81-47ce-befb-d2ec09617b1c-kube-api-access-65qxk\") pod \"nova-cell0-conductor-db-sync-xbz4b\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.669920 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-859447f896-dzzll"] Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.685116 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-859447f896-dzzll"] Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 
14:06:22.685426 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711551 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-combined-ca-bundle\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711592 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-config-data\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711621 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-combined-ca-bundle\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711647 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-internal-tls-certs\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711676 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-combined-ca-bundle\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711700 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-config-data-custom\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711726 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgfw8\" (UniqueName: \"kubernetes.io/projected/5509ee4a-07b1-4462-993e-ba8e1569651c-kube-api-access-jgfw8\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711746 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416da471-329e-45f8-b786-e8841f575f20-logs\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711769 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3ca6012-ae4a-45ab-8975-9de943d2f790-logs\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711793 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmbrv\" (UniqueName: \"kubernetes.io/projected/416da471-329e-45f8-b786-e8841f575f20-kube-api-access-fmbrv\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711815 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd2hg\" (UniqueName: \"kubernetes.io/projected/d3ca6012-ae4a-45ab-8975-9de943d2f790-kube-api-access-fd2hg\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711863 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-config-data\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711901 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-config-data-custom\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711923 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-public-tls-certs\") pod \"barbican-api-859447f896-dzzll\" (UID: 
\"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711941 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5509ee4a-07b1-4462-993e-ba8e1569651c-logs\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.711986 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-config-data\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.712061 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-config-data-custom\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.772190 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813004 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-config-data-custom\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813283 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-combined-ca-bundle\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813299 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-config-data\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813323 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-combined-ca-bundle\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813354 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-internal-tls-certs\") pod \"barbican-api-859447f896-dzzll\" (UID: 
\"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813383 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-combined-ca-bundle\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813406 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-config-data-custom\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813449 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgfw8\" (UniqueName: \"kubernetes.io/projected/5509ee4a-07b1-4462-993e-ba8e1569651c-kube-api-access-jgfw8\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813470 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416da471-329e-45f8-b786-e8841f575f20-logs\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813494 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3ca6012-ae4a-45ab-8975-9de943d2f790-logs\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: 
\"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813513 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmbrv\" (UniqueName: \"kubernetes.io/projected/416da471-329e-45f8-b786-e8841f575f20-kube-api-access-fmbrv\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813533 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd2hg\" (UniqueName: \"kubernetes.io/projected/d3ca6012-ae4a-45ab-8975-9de943d2f790-kube-api-access-fd2hg\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813552 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-config-data\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813578 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-config-data-custom\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813596 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-public-tls-certs\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813610 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5509ee4a-07b1-4462-993e-ba8e1569651c-logs\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.813632 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-config-data\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.822742 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-combined-ca-bundle\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.823064 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-config-data\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.823675 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416da471-329e-45f8-b786-e8841f575f20-logs\") 
pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.823844 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-combined-ca-bundle\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.824040 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3ca6012-ae4a-45ab-8975-9de943d2f790-logs\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.824600 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5509ee4a-07b1-4462-993e-ba8e1569651c-logs\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.825300 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-config-data-custom\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.827672 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-config-data-custom\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: 
\"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.829683 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-combined-ca-bundle\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.829871 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-config-data\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.831138 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-internal-tls-certs\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.838308 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgfw8\" (UniqueName: \"kubernetes.io/projected/5509ee4a-07b1-4462-993e-ba8e1569651c-kube-api-access-jgfw8\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.838451 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-config-data-custom\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: 
\"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.840970 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-config-data\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.856184 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-public-tls-certs\") pod \"barbican-api-859447f896-dzzll\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") " pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.860392 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd2hg\" (UniqueName: \"kubernetes.io/projected/d3ca6012-ae4a-45ab-8975-9de943d2f790-kube-api-access-fd2hg\") pod \"barbican-worker-7898f695d7-6lw8w\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") " pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.865027 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmbrv\" (UniqueName: \"kubernetes.io/projected/416da471-329e-45f8-b786-e8841f575f20-kube-api-access-fmbrv\") pod \"barbican-keystone-listener-86dc4fcb7d-k54hq\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") " pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:22 crc kubenswrapper[4914]: I0127 14:06:22.888940 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7898f695d7-6lw8w" Jan 27 14:06:23 crc kubenswrapper[4914]: I0127 14:06:23.055213 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-859447f896-dzzll" Jan 27 14:06:23 crc kubenswrapper[4914]: I0127 14:06:23.081532 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3cfa4b-174e-4958-8bd6-560f1f990c68","Type":"ContainerStarted","Data":"d4c127142b1c3dcc24ccbae5c830289723bb4091bb8bc9ad3c687d551e2957cc"} Jan 27 14:06:23 crc kubenswrapper[4914]: I0127 14:06:23.160509 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" Jan 27 14:06:23 crc kubenswrapper[4914]: I0127 14:06:23.241133 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 14:06:23 crc kubenswrapper[4914]: I0127 14:06:23.347648 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xbz4b"] Jan 27 14:06:23 crc kubenswrapper[4914]: I0127 14:06:23.441681 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7898f695d7-6lw8w"] Jan 27 14:06:23 crc kubenswrapper[4914]: I0127 14:06:23.485601 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 14:06:23 crc kubenswrapper[4914]: I0127 14:06:23.597144 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-859447f896-dzzll"] Jan 27 14:06:23 crc kubenswrapper[4914]: I0127 14:06:23.800277 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86dc4fcb7d-k54hq"] Jan 27 14:06:23 crc kubenswrapper[4914]: W0127 14:06:23.809652 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod416da471_329e_45f8_b786_e8841f575f20.slice/crio-831bf0ea499789379def09205f1f3f215ce7753a3d2eded6f7c63b6aa9b98bdd WatchSource:0}: Error finding container 
831bf0ea499789379def09205f1f3f215ce7753a3d2eded6f7c63b6aa9b98bdd: Status 404 returned error can't find the container with id 831bf0ea499789379def09205f1f3f215ce7753a3d2eded6f7c63b6aa9b98bdd Jan 27 14:06:23 crc kubenswrapper[4914]: I0127 14:06:23.951982 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-86dc4fcb7d-k54hq"] Jan 27 14:06:23 crc kubenswrapper[4914]: I0127 14:06:23.969019 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-69bc9ddd86-ns2qc"] Jan 27 14:06:23 crc kubenswrapper[4914]: I0127 14:06:23.970637 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:23 crc kubenswrapper[4914]: I0127 14:06:23.982817 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-69bc9ddd86-ns2qc"] Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.047870 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7898f695d7-6lw8w"] Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.054527 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66b37bce-7727-4a01-b0e5-c4df82590c96-config-data-custom\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.054784 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66b37bce-7727-4a01-b0e5-c4df82590c96-config-data\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc 
kubenswrapper[4914]: I0127 14:06:24.054961 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66b37bce-7727-4a01-b0e5-c4df82590c96-logs\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.055151 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b37bce-7727-4a01-b0e5-c4df82590c96-combined-ca-bundle\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.055229 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bpft\" (UniqueName: \"kubernetes.io/projected/66b37bce-7727-4a01-b0e5-c4df82590c96-kube-api-access-9bpft\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.076550 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-879c45d7-8hbrg"] Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.080651 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-879c45d7-8hbrg" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.097734 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-859447f896-dzzll"] Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.108179 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-879c45d7-8hbrg"] Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.135951 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7c9f4b5684-nv57j"] Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.137529 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c9f4b5684-nv57j" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.156900 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c9f4b5684-nv57j"] Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.158985 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bpft\" (UniqueName: \"kubernetes.io/projected/66b37bce-7727-4a01-b0e5-c4df82590c96-kube-api-access-9bpft\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.159051 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2813942-0a24-4127-ab18-2b5031826e2c-config-data-custom\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.159140 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/66b37bce-7727-4a01-b0e5-c4df82590c96-config-data-custom\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.159159 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bs8r\" (UniqueName: \"kubernetes.io/projected/c2813942-0a24-4127-ab18-2b5031826e2c-kube-api-access-2bs8r\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.159185 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2813942-0a24-4127-ab18-2b5031826e2c-combined-ca-bundle\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.159204 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66b37bce-7727-4a01-b0e5-c4df82590c96-config-data\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.159231 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66b37bce-7727-4a01-b0e5-c4df82590c96-logs\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.159283 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2813942-0a24-4127-ab18-2b5031826e2c-logs\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.159311 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2813942-0a24-4127-ab18-2b5031826e2c-config-data\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.159338 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b37bce-7727-4a01-b0e5-c4df82590c96-combined-ca-bundle\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.161605 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66b37bce-7727-4a01-b0e5-c4df82590c96-logs\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.164330 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-859447f896-dzzll" event={"ID":"5509ee4a-07b1-4462-993e-ba8e1569651c","Type":"ContainerStarted","Data":"55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3"} Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.164455 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-859447f896-dzzll" 
event={"ID":"5509ee4a-07b1-4462-993e-ba8e1569651c","Type":"ContainerStarted","Data":"5f91c7d52ba4815a17078fcbf1a97b68494728a72d26fb5acbfa2614c794c2ec"} Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.168913 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66b37bce-7727-4a01-b0e5-c4df82590c96-config-data-custom\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.169908 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66b37bce-7727-4a01-b0e5-c4df82590c96-config-data\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.173310 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xbz4b" event={"ID":"58d81784-ad81-47ce-befb-d2ec09617b1c","Type":"ContainerStarted","Data":"3801ab92ddbcfd68078db723f3d49b66af3f4e8a7e3fe1e0cbdc88d83e0a7e07"} Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.173448 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b37bce-7727-4a01-b0e5-c4df82590c96-combined-ca-bundle\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.176089 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7898f695d7-6lw8w" 
event={"ID":"d3ca6012-ae4a-45ab-8975-9de943d2f790","Type":"ContainerStarted","Data":"22b6a69449be50b1aa84bf1899eff41cf65dd64c71bb204fe9b91007b9a45c53"} Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.177883 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7898f695d7-6lw8w" event={"ID":"d3ca6012-ae4a-45ab-8975-9de943d2f790","Type":"ContainerStarted","Data":"d4f3c8f0c5f22d31ee34a5ee812905ecb1af58af9065e3c6f67bc9ace83c3e97"} Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.177997 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7898f695d7-6lw8w" event={"ID":"d3ca6012-ae4a-45ab-8975-9de943d2f790","Type":"ContainerStarted","Data":"dc08cc567e42c5707091497a86548ebaa1b97c7e4091ddea781aa6d76b894d48"} Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.181193 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" event={"ID":"416da471-329e-45f8-b786-e8841f575f20","Type":"ContainerStarted","Data":"831bf0ea499789379def09205f1f3f215ce7753a3d2eded6f7c63b6aa9b98bdd"} Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.184656 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bpft\" (UniqueName: \"kubernetes.io/projected/66b37bce-7727-4a01-b0e5-c4df82590c96-kube-api-access-9bpft\") pod \"barbican-keystone-listener-69bc9ddd86-ns2qc\" (UID: \"66b37bce-7727-4a01-b0e5-c4df82590c96\") " pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.215908 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7898f695d7-6lw8w" podStartSLOduration=2.215892823 podStartE2EDuration="2.215892823s" podCreationTimestamp="2026-01-27 14:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
14:06:24.203040641 +0000 UTC m=+1342.515390756" watchObservedRunningTime="2026-01-27 14:06:24.215892823 +0000 UTC m=+1342.528242908" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.261555 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cb7de31-9af1-452a-a335-c1ebf2876522-logs\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.261633 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2813942-0a24-4127-ab18-2b5031826e2c-logs\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.261672 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2813942-0a24-4127-ab18-2b5031826e2c-config-data\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.261746 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2813942-0a24-4127-ab18-2b5031826e2c-config-data-custom\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.261821 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwgg8\" (UniqueName: \"kubernetes.io/projected/8cb7de31-9af1-452a-a335-c1ebf2876522-kube-api-access-wwgg8\") pod 
\"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.261875 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-internal-tls-certs\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.262098 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-config-data-custom\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.262142 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bs8r\" (UniqueName: \"kubernetes.io/projected/c2813942-0a24-4127-ab18-2b5031826e2c-kube-api-access-2bs8r\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.262165 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-public-tls-certs\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.262193 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c2813942-0a24-4127-ab18-2b5031826e2c-combined-ca-bundle\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.262251 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-config-data\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.262306 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-combined-ca-bundle\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.263322 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2813942-0a24-4127-ab18-2b5031826e2c-logs\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.269905 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2813942-0a24-4127-ab18-2b5031826e2c-config-data-custom\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg" Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.270064 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c2813942-0a24-4127-ab18-2b5031826e2c-combined-ca-bundle\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.275417 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2813942-0a24-4127-ab18-2b5031826e2c-config-data\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.294791 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bs8r\" (UniqueName: \"kubernetes.io/projected/c2813942-0a24-4127-ab18-2b5031826e2c-kube-api-access-2bs8r\") pod \"barbican-worker-879c45d7-8hbrg\" (UID: \"c2813942-0a24-4127-ab18-2b5031826e2c\") " pod="openstack/barbican-worker-879c45d7-8hbrg"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.333651 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.364181 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwgg8\" (UniqueName: \"kubernetes.io/projected/8cb7de31-9af1-452a-a335-c1ebf2876522-kube-api-access-wwgg8\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.364230 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-internal-tls-certs\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.364280 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-config-data-custom\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.364303 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-public-tls-certs\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.364359 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-config-data\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.364400 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-combined-ca-bundle\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.364423 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cb7de31-9af1-452a-a335-c1ebf2876522-logs\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.367010 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cb7de31-9af1-452a-a335-c1ebf2876522-logs\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.374154 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-public-tls-certs\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.376480 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-config-data-custom\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.381108 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-combined-ca-bundle\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.381374 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-config-data\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.384404 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwgg8\" (UniqueName: \"kubernetes.io/projected/8cb7de31-9af1-452a-a335-c1ebf2876522-kube-api-access-wwgg8\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.384415 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cb7de31-9af1-452a-a335-c1ebf2876522-internal-tls-certs\") pod \"barbican-api-7c9f4b5684-nv57j\" (UID: \"8cb7de31-9af1-452a-a335-c1ebf2876522\") " pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.420200 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-879c45d7-8hbrg"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.481956 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:24 crc kubenswrapper[4914]: I0127 14:06:24.884537 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-69bc9ddd86-ns2qc"]
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.112652 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c9f4b5684-nv57j"]
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.123656 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-879c45d7-8hbrg"]
Jan 27 14:06:25 crc kubenswrapper[4914]: W0127 14:06:25.133036 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cb7de31_9af1_452a_a335_c1ebf2876522.slice/crio-1688ee86d785951ae96204e8dfcbc03f154f6a73227e1ae55f0fadf0995400c8 WatchSource:0}: Error finding container 1688ee86d785951ae96204e8dfcbc03f154f6a73227e1ae55f0fadf0995400c8: Status 404 returned error can't find the container with id 1688ee86d785951ae96204e8dfcbc03f154f6a73227e1ae55f0fadf0995400c8
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.230128 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-879c45d7-8hbrg" event={"ID":"c2813942-0a24-4127-ab18-2b5031826e2c","Type":"ContainerStarted","Data":"82afe4c24844870b00e9ebbe39714bfbb3da7ea0b842daf89117f70b5bc0ec68"}
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.241631 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-859447f896-dzzll" event={"ID":"5509ee4a-07b1-4462-993e-ba8e1569651c","Type":"ContainerStarted","Data":"83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070"}
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.241826 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-859447f896-dzzll" podUID="5509ee4a-07b1-4462-993e-ba8e1569651c" containerName="barbican-api-log" containerID="cri-o://55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3" gracePeriod=30
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.242039 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-859447f896-dzzll"
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.242088 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-859447f896-dzzll"
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.242033 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-859447f896-dzzll" podUID="5509ee4a-07b1-4462-993e-ba8e1569651c" containerName="barbican-api" containerID="cri-o://83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070" gracePeriod=30
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.249942 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" event={"ID":"66b37bce-7727-4a01-b0e5-c4df82590c96","Type":"ContainerStarted","Data":"f67bad4d9025618d64bc721e7c8dc52868c2e614a943fb23a4eeb3e533cc5ce3"}
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.261143 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" event={"ID":"416da471-329e-45f8-b786-e8841f575f20","Type":"ContainerStarted","Data":"6abc0f205dd4351533405721c80ece7917533d6d8e9a925eb52fc58ac06a033e"}
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.261192 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" event={"ID":"416da471-329e-45f8-b786-e8841f575f20","Type":"ContainerStarted","Data":"bfd890dd9d32f4b7a29c9943572309e57caf863269314ecf44b18da8b8372e40"}
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.261335 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" podUID="416da471-329e-45f8-b786-e8841f575f20" containerName="barbican-keystone-listener-log" containerID="cri-o://6abc0f205dd4351533405721c80ece7917533d6d8e9a925eb52fc58ac06a033e" gracePeriod=30
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.261422 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" podUID="416da471-329e-45f8-b786-e8841f575f20" containerName="barbican-keystone-listener" containerID="cri-o://bfd890dd9d32f4b7a29c9943572309e57caf863269314ecf44b18da8b8372e40" gracePeriod=30
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.268373 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-859447f896-dzzll" podStartSLOduration=3.268358127 podStartE2EDuration="3.268358127s" podCreationTimestamp="2026-01-27 14:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:25.266114565 +0000 UTC m=+1343.578464650" watchObservedRunningTime="2026-01-27 14:06:25.268358127 +0000 UTC m=+1343.580708202"
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.268650 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c9f4b5684-nv57j" event={"ID":"8cb7de31-9af1-452a-a335-c1ebf2876522","Type":"ContainerStarted","Data":"1688ee86d785951ae96204e8dfcbc03f154f6a73227e1ae55f0fadf0995400c8"}
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.298681 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" podStartSLOduration=3.298661677 podStartE2EDuration="3.298661677s" podCreationTimestamp="2026-01-27 14:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:25.292432707 +0000 UTC m=+1343.604782792" watchObservedRunningTime="2026-01-27 14:06:25.298661677 +0000 UTC m=+1343.611011772"
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.299210 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3cfa4b-174e-4958-8bd6-560f1f990c68","Type":"ContainerStarted","Data":"c8077a6806c4f412de26de667031b029ede7bb85f8579b330ac54a9c2cf231a0"}
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.299384 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7898f695d7-6lw8w" podUID="d3ca6012-ae4a-45ab-8975-9de943d2f790" containerName="barbican-worker-log" containerID="cri-o://d4f3c8f0c5f22d31ee34a5ee812905ecb1af58af9065e3c6f67bc9ace83c3e97" gracePeriod=30
Jan 27 14:06:25 crc kubenswrapper[4914]: I0127 14:06:25.299515 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7898f695d7-6lw8w" podUID="d3ca6012-ae4a-45ab-8975-9de943d2f790" containerName="barbican-worker" containerID="cri-o://22b6a69449be50b1aa84bf1899eff41cf65dd64c71bb204fe9b91007b9a45c53" gracePeriod=30
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.342456 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" event={"ID":"66b37bce-7727-4a01-b0e5-c4df82590c96","Type":"ContainerStarted","Data":"29e860d374fcec0821ef5101e5b16de124aac1618f6bd9c36b8fcabcd534d8d6"}
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.342703 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" event={"ID":"66b37bce-7727-4a01-b0e5-c4df82590c96","Type":"ContainerStarted","Data":"c0c11671e70073a0eaf498f6675088f39f789c3322d2139da5ad9e0dc0c5f3cb"}
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.359696 4914 generic.go:334] "Generic (PLEG): container finished" podID="d3ca6012-ae4a-45ab-8975-9de943d2f790" containerID="d4f3c8f0c5f22d31ee34a5ee812905ecb1af58af9065e3c6f67bc9ace83c3e97" exitCode=143
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.359771 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7898f695d7-6lw8w" event={"ID":"d3ca6012-ae4a-45ab-8975-9de943d2f790","Type":"ContainerDied","Data":"d4f3c8f0c5f22d31ee34a5ee812905ecb1af58af9065e3c6f67bc9ace83c3e97"}
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.363123 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c9f4b5684-nv57j" event={"ID":"8cb7de31-9af1-452a-a335-c1ebf2876522","Type":"ContainerStarted","Data":"66274b75d33f756068f29cbd6921c1c6dd259ae833c4dcdb07dc13e4ae65f38e"}
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.363168 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c9f4b5684-nv57j" event={"ID":"8cb7de31-9af1-452a-a335-c1ebf2876522","Type":"ContainerStarted","Data":"2724c8c3159756ae3097b597de35bb46750e065c14c16dd1e5b2cf2352cbb881"}
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.363939 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.363972 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.364076 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-859447f896-dzzll"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.379186 4914 generic.go:334] "Generic (PLEG): container finished" podID="416da471-329e-45f8-b786-e8841f575f20" containerID="6abc0f205dd4351533405721c80ece7917533d6d8e9a925eb52fc58ac06a033e" exitCode=143
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.379271 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" event={"ID":"416da471-329e-45f8-b786-e8841f575f20","Type":"ContainerDied","Data":"6abc0f205dd4351533405721c80ece7917533d6d8e9a925eb52fc58ac06a033e"}
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.386036 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3cfa4b-174e-4958-8bd6-560f1f990c68","Type":"ContainerStarted","Data":"835033792ffcb529a295ad0bf0c31ed4f5870992c6ebe003f06c1532e98ee033"}
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.394698 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-69bc9ddd86-ns2qc" podStartSLOduration=3.394674804 podStartE2EDuration="3.394674804s" podCreationTimestamp="2026-01-27 14:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:26.370315676 +0000 UTC m=+1344.682665751" watchObservedRunningTime="2026-01-27 14:06:26.394674804 +0000 UTC m=+1344.707024899"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.405423 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-879c45d7-8hbrg" event={"ID":"c2813942-0a24-4127-ab18-2b5031826e2c","Type":"ContainerStarted","Data":"ce4c208a056be97f9dec513d64693f9d2fe5d19b353c936649f6725b0ddce9de"}
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.405474 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-879c45d7-8hbrg" event={"ID":"c2813942-0a24-4127-ab18-2b5031826e2c","Type":"ContainerStarted","Data":"54954e672bae5260bdba0e83a19dacf91fe8a0388dc82128d96f47312dd1b95f"}
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.415177 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7c9f4b5684-nv57j" podStartSLOduration=2.415148945 podStartE2EDuration="2.415148945s" podCreationTimestamp="2026-01-27 14:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:26.392014691 +0000 UTC m=+1344.704364776" watchObservedRunningTime="2026-01-27 14:06:26.415148945 +0000 UTC m=+1344.727499030"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.423173 4914 generic.go:334] "Generic (PLEG): container finished" podID="5509ee4a-07b1-4462-993e-ba8e1569651c" containerID="83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070" exitCode=0
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.423205 4914 generic.go:334] "Generic (PLEG): container finished" podID="5509ee4a-07b1-4462-993e-ba8e1569651c" containerID="55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3" exitCode=143
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.423228 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-859447f896-dzzll" event={"ID":"5509ee4a-07b1-4462-993e-ba8e1569651c","Type":"ContainerDied","Data":"83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070"}
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.423262 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-859447f896-dzzll" event={"ID":"5509ee4a-07b1-4462-993e-ba8e1569651c","Type":"ContainerDied","Data":"55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3"}
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.423277 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-859447f896-dzzll" event={"ID":"5509ee4a-07b1-4462-993e-ba8e1569651c","Type":"ContainerDied","Data":"5f91c7d52ba4815a17078fcbf1a97b68494728a72d26fb5acbfa2614c794c2ec"}
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.423297 4914 scope.go:117] "RemoveContainer" containerID="83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.423429 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-859447f896-dzzll"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.469933 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-c4dfd54dd-q4t9s"]
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.470207 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" podUID="b7a3f205-3ea1-491b-af09-8e2ad479e0a5" containerName="barbican-keystone-listener-log" containerID="cri-o://38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c" gracePeriod=30
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.470620 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" podUID="b7a3f205-3ea1-491b-af09-8e2ad479e0a5" containerName="barbican-keystone-listener" containerID="cri-o://443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671" gracePeriod=30
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.513402 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-879c45d7-8hbrg" podStartSLOduration=2.5133805479999998 podStartE2EDuration="2.513380548s" podCreationTimestamp="2026-01-27 14:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:26.459716307 +0000 UTC m=+1344.772066402" watchObservedRunningTime="2026-01-27 14:06:26.513380548 +0000 UTC m=+1344.825730633"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.525121 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5b88564dfc-pk2d6"]
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.525372 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5b88564dfc-pk2d6" podUID="87cff560-6d78-4257-b80f-16e6172fc629" containerName="barbican-worker-log" containerID="cri-o://1e43ccef9ed27cc09e85bb71803810d7f2a701d33b94b88b7c80b4666ba13daa" gracePeriod=30
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.525850 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5b88564dfc-pk2d6" podUID="87cff560-6d78-4257-b80f-16e6172fc629" containerName="barbican-worker" containerID="cri-o://3f984ff47bc02ee131274ec13f98b917eed6d7c0984b5f4ad9ed23f20d0a300b" gracePeriod=30
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.526619 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-internal-tls-certs\") pod \"5509ee4a-07b1-4462-993e-ba8e1569651c\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") "
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.526670 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-config-data\") pod \"5509ee4a-07b1-4462-993e-ba8e1569651c\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") "
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.527292 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-public-tls-certs\") pod \"5509ee4a-07b1-4462-993e-ba8e1569651c\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") "
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.527381 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5509ee4a-07b1-4462-993e-ba8e1569651c-logs\") pod \"5509ee4a-07b1-4462-993e-ba8e1569651c\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") "
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.527516 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgfw8\" (UniqueName: \"kubernetes.io/projected/5509ee4a-07b1-4462-993e-ba8e1569651c-kube-api-access-jgfw8\") pod \"5509ee4a-07b1-4462-993e-ba8e1569651c\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") "
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.527556 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-config-data-custom\") pod \"5509ee4a-07b1-4462-993e-ba8e1569651c\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") "
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.527622 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-combined-ca-bundle\") pod \"5509ee4a-07b1-4462-993e-ba8e1569651c\" (UID: \"5509ee4a-07b1-4462-993e-ba8e1569651c\") "
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.541140 4914 scope.go:117] "RemoveContainer" containerID="55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.542568 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5509ee4a-07b1-4462-993e-ba8e1569651c-logs" (OuterVolumeSpecName: "logs") pod "5509ee4a-07b1-4462-993e-ba8e1569651c" (UID: "5509ee4a-07b1-4462-993e-ba8e1569651c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.543013 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5509ee4a-07b1-4462-993e-ba8e1569651c-kube-api-access-jgfw8" (OuterVolumeSpecName: "kube-api-access-jgfw8") pod "5509ee4a-07b1-4462-993e-ba8e1569651c" (UID: "5509ee4a-07b1-4462-993e-ba8e1569651c"). InnerVolumeSpecName "kube-api-access-jgfw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.548863 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5509ee4a-07b1-4462-993e-ba8e1569651c" (UID: "5509ee4a-07b1-4462-993e-ba8e1569651c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.573356 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5509ee4a-07b1-4462-993e-ba8e1569651c" (UID: "5509ee4a-07b1-4462-993e-ba8e1569651c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.578073 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.578134 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.604544 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-config-data" (OuterVolumeSpecName: "config-data") pod "5509ee4a-07b1-4462-993e-ba8e1569651c" (UID: "5509ee4a-07b1-4462-993e-ba8e1569651c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.627240 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5509ee4a-07b1-4462-993e-ba8e1569651c" (UID: "5509ee4a-07b1-4462-993e-ba8e1569651c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.628899 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.630592 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgfw8\" (UniqueName: \"kubernetes.io/projected/5509ee4a-07b1-4462-993e-ba8e1569651c-kube-api-access-jgfw8\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.630626 4914 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.630640 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.630654 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.630666 4914 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.630677 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5509ee4a-07b1-4462-993e-ba8e1569651c-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.636590 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.686413 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5509ee4a-07b1-4462-993e-ba8e1569651c" (UID: "5509ee4a-07b1-4462-993e-ba8e1569651c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.733099 4914 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5509ee4a-07b1-4462-993e-ba8e1569651c-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.805281 4914 scope.go:117] "RemoveContainer" containerID="83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070"
Jan 27 14:06:26 crc kubenswrapper[4914]: E0127 14:06:26.807093 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070\": container with ID starting with 83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070 not found: ID does not exist" containerID="83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.807140 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070"} err="failed to get container status \"83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070\": rpc error: code = NotFound desc = could not find container \"83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070\": container with ID starting with 83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070 not found: ID does not exist"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.807170 4914 scope.go:117] "RemoveContainer" containerID="55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3"
Jan 27 14:06:26 crc kubenswrapper[4914]: E0127 14:06:26.807593 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3\": container with ID starting with 55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3 not found: ID does not exist" containerID="55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.807632 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3"} err="failed to get container status \"55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3\": rpc error: code = NotFound desc = could not find container \"55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3\": container with ID starting with 55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3 not found: ID does not exist"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.807666 4914 scope.go:117] "RemoveContainer" containerID="83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.808000 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070"} err="failed to get container status \"83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070\": rpc error: code = NotFound desc = could not find container \"83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070\": container with ID starting with 83f6fd64498fcb790fb06c253c0aff20eeb8ee4b1da54e1ace89cbe2286e9070 not found: ID does not exist"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.808021 4914 scope.go:117] "RemoveContainer" containerID="55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.808871 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3"} err="failed to get container status \"55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3\": rpc error: code = NotFound desc = could not find container \"55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3\": container with ID starting with 55f6b2f29fa5a51025ef8b113ff721eb6844323d2c42bf53bb9bbaf30075aaa3 not found: ID does not exist"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.810016 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-859447f896-dzzll"]
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.823127 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-859447f896-dzzll"]
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.899878 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6669b7ffb9-n8php"]
Jan 27 14:06:26 crc kubenswrapper[4914]: E0127 14:06:26.900270 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5509ee4a-07b1-4462-993e-ba8e1569651c" containerName="barbican-api-log"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.900287 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="5509ee4a-07b1-4462-993e-ba8e1569651c" containerName="barbican-api-log"
Jan 27 14:06:26 crc kubenswrapper[4914]: E0127 14:06:26.900299 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5509ee4a-07b1-4462-993e-ba8e1569651c" containerName="barbican-api"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.900306 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="5509ee4a-07b1-4462-993e-ba8e1569651c" containerName="barbican-api"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.900479 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="5509ee4a-07b1-4462-993e-ba8e1569651c" containerName="barbican-api-log"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.900497 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="5509ee4a-07b1-4462-993e-ba8e1569651c" containerName="barbican-api"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.901354 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6669b7ffb9-n8php"
Jan 27 14:06:26 crc kubenswrapper[4914]: I0127 14:06:26.921762 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6669b7ffb9-n8php"]
Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.046773 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-httpd-config\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php"
Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.046909 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kldcb\" (UniqueName: \"kubernetes.io/projected/1b9b723f-e648-4f12-86f7-d453e000a46e-kube-api-access-kldcb\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php"
Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.046951 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-config\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php"
Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.046972 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-public-tls-certs\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php"
Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.047138 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-combined-ca-bundle\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php"
Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.047343 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-ovndb-tls-certs\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php"
Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.047444 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-internal-tls-certs\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php"
Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.178123 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-ovndb-tls-certs\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " 
pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.189144 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-internal-tls-certs\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.189943 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-httpd-config\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.191949 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kldcb\" (UniqueName: \"kubernetes.io/projected/1b9b723f-e648-4f12-86f7-d453e000a46e-kube-api-access-kldcb\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.192008 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-config\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.192034 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-public-tls-certs\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 
14:06:27.192171 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-combined-ca-bundle\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.207247 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-httpd-config\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.207316 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-combined-ca-bundle\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.207329 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-public-tls-certs\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.207467 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-config\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.210349 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-ovndb-tls-certs\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.222963 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kldcb\" (UniqueName: \"kubernetes.io/projected/1b9b723f-e648-4f12-86f7-d453e000a46e-kube-api-access-kldcb\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.225266 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b9b723f-e648-4f12-86f7-d453e000a46e-internal-tls-certs\") pod \"neutron-6669b7ffb9-n8php\" (UID: \"1b9b723f-e648-4f12-86f7-d453e000a46e\") " pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.462902 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.478187 4914 generic.go:334] "Generic (PLEG): container finished" podID="87cff560-6d78-4257-b80f-16e6172fc629" containerID="1e43ccef9ed27cc09e85bb71803810d7f2a701d33b94b88b7c80b4666ba13daa" exitCode=143 Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.478326 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b88564dfc-pk2d6" event={"ID":"87cff560-6d78-4257-b80f-16e6172fc629","Type":"ContainerDied","Data":"1e43ccef9ed27cc09e85bb71803810d7f2a701d33b94b88b7c80b4666ba13daa"} Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.508718 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-config-data-custom\") pod \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.508820 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-config-data\") pod \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.508862 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-logs\") pod \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.508931 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdj7z\" (UniqueName: \"kubernetes.io/projected/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-kube-api-access-jdj7z\") pod 
\"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.508987 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-combined-ca-bundle\") pod \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\" (UID: \"b7a3f205-3ea1-491b-af09-8e2ad479e0a5\") " Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.510150 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-logs" (OuterVolumeSpecName: "logs") pod "b7a3f205-3ea1-491b-af09-8e2ad479e0a5" (UID: "b7a3f205-3ea1-491b-af09-8e2ad479e0a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.519103 4914 generic.go:334] "Generic (PLEG): container finished" podID="b7a3f205-3ea1-491b-af09-8e2ad479e0a5" containerID="443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671" exitCode=0 Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.519145 4914 generic.go:334] "Generic (PLEG): container finished" podID="b7a3f205-3ea1-491b-af09-8e2ad479e0a5" containerID="38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c" exitCode=143 Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.520152 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.520621 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" event={"ID":"b7a3f205-3ea1-491b-af09-8e2ad479e0a5","Type":"ContainerDied","Data":"443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671"} Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.520651 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" event={"ID":"b7a3f205-3ea1-491b-af09-8e2ad479e0a5","Type":"ContainerDied","Data":"38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c"} Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.520664 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c4dfd54dd-q4t9s" event={"ID":"b7a3f205-3ea1-491b-af09-8e2ad479e0a5","Type":"ContainerDied","Data":"75e5259d505fc3660d895bf43d44e53e745881d2ffe220b8530e398971255ebd"} Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.520679 4914 scope.go:117] "RemoveContainer" containerID="443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.521789 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.521909 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.521975 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6669b7ffb9-n8php" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.528634 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.528698 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.533864 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b7a3f205-3ea1-491b-af09-8e2ad479e0a5" (UID: "b7a3f205-3ea1-491b-af09-8e2ad479e0a5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.541560 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-kube-api-access-jdj7z" (OuterVolumeSpecName: "kube-api-access-jdj7z") pod "b7a3f205-3ea1-491b-af09-8e2ad479e0a5" (UID: "b7a3f205-3ea1-491b-af09-8e2ad479e0a5"). InnerVolumeSpecName "kube-api-access-jdj7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.611224 4914 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.611572 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.611585 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdj7z\" (UniqueName: \"kubernetes.io/projected/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-kube-api-access-jdj7z\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.653072 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7a3f205-3ea1-491b-af09-8e2ad479e0a5" (UID: "b7a3f205-3ea1-491b-af09-8e2ad479e0a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.676112 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-config-data" (OuterVolumeSpecName: "config-data") pod "b7a3f205-3ea1-491b-af09-8e2ad479e0a5" (UID: "b7a3f205-3ea1-491b-af09-8e2ad479e0a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.696004 4914 scope.go:117] "RemoveContainer" containerID="38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.713264 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.713302 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7a3f205-3ea1-491b-af09-8e2ad479e0a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.729615 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.731694 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.858235 4914 scope.go:117] "RemoveContainer" containerID="443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671" Jan 27 14:06:27 crc kubenswrapper[4914]: E0127 14:06:27.859136 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671\": container with ID starting with 443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671 not found: ID does not exist" containerID="443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.859174 4914 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671"} err="failed to get container status \"443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671\": rpc error: code = NotFound desc = could not find container \"443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671\": container with ID starting with 443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671 not found: ID does not exist" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.859204 4914 scope.go:117] "RemoveContainer" containerID="38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c" Jan 27 14:06:27 crc kubenswrapper[4914]: E0127 14:06:27.859514 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c\": container with ID starting with 38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c not found: ID does not exist" containerID="38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.859539 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c"} err="failed to get container status \"38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c\": rpc error: code = NotFound desc = could not find container \"38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c\": container with ID starting with 38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c not found: ID does not exist" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.859556 4914 scope.go:117] "RemoveContainer" containerID="443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.860576 4914 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671"} err="failed to get container status \"443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671\": rpc error: code = NotFound desc = could not find container \"443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671\": container with ID starting with 443f51f47aed3ab53947bbf9f179b757bb5ef838ec776dcc8d725ad2ca7f8671 not found: ID does not exist" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.860610 4914 scope.go:117] "RemoveContainer" containerID="38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.860937 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c"} err="failed to get container status \"38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c\": rpc error: code = NotFound desc = could not find container \"38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c\": container with ID starting with 38919fa71424522f344b093627465ccce940ab05e7911e961ce9c0d601071e5c not found: ID does not exist" Jan 27 14:06:27 crc kubenswrapper[4914]: I0127 14:06:27.949804 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-c4dfd54dd-q4t9s"] Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.001273 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-c4dfd54dd-q4t9s"] Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.277872 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6669b7ffb9-n8php"] Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.320731 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5509ee4a-07b1-4462-993e-ba8e1569651c" 
path="/var/lib/kubelet/pods/5509ee4a-07b1-4462-993e-ba8e1569651c/volumes" Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.322157 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7a3f205-3ea1-491b-af09-8e2ad479e0a5" path="/var/lib/kubelet/pods/b7a3f205-3ea1-491b-af09-8e2ad479e0a5/volumes" Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.545433 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3cfa4b-174e-4958-8bd6-560f1f990c68","Type":"ContainerStarted","Data":"504e00eeef20e7d7416750d336fa826d2664da22457b5b2f851e6f1d71be361b"} Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.545587 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="ceilometer-central-agent" containerID="cri-o://d4c127142b1c3dcc24ccbae5c830289723bb4091bb8bc9ad3c687d551e2957cc" gracePeriod=30 Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.545859 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.546036 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="proxy-httpd" containerID="cri-o://504e00eeef20e7d7416750d336fa826d2664da22457b5b2f851e6f1d71be361b" gracePeriod=30 Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.546173 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="sg-core" containerID="cri-o://835033792ffcb529a295ad0bf0c31ed4f5870992c6ebe003f06c1532e98ee033" gracePeriod=30 Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.546237 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="ceilometer-notification-agent" containerID="cri-o://c8077a6806c4f412de26de667031b029ede7bb85f8579b330ac54a9c2cf231a0" gracePeriod=30 Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.558667 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6669b7ffb9-n8php" event={"ID":"1b9b723f-e648-4f12-86f7-d453e000a46e","Type":"ContainerStarted","Data":"7343bef40130b3affd487a32667eb9670cb0a7b09d4561876c9b713c484a3a21"} Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.559263 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.560728 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 14:06:28 crc kubenswrapper[4914]: I0127 14:06:28.586815 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.069081778 podStartE2EDuration="8.586792661s" podCreationTimestamp="2026-01-27 14:06:20 +0000 UTC" firstStartedPulling="2026-01-27 14:06:21.371162461 +0000 UTC m=+1339.683512546" lastFinishedPulling="2026-01-27 14:06:27.888873344 +0000 UTC m=+1346.201223429" observedRunningTime="2026-01-27 14:06:28.58058123 +0000 UTC m=+1346.892931315" watchObservedRunningTime="2026-01-27 14:06:28.586792661 +0000 UTC m=+1346.899142756" Jan 27 14:06:29 crc kubenswrapper[4914]: I0127 14:06:29.576696 4914 generic.go:334] "Generic (PLEG): container finished" podID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerID="504e00eeef20e7d7416750d336fa826d2664da22457b5b2f851e6f1d71be361b" exitCode=0 Jan 27 14:06:29 crc kubenswrapper[4914]: I0127 14:06:29.577014 4914 generic.go:334] "Generic (PLEG): container finished" podID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerID="835033792ffcb529a295ad0bf0c31ed4f5870992c6ebe003f06c1532e98ee033" exitCode=2 Jan 27 14:06:29 
crc kubenswrapper[4914]: I0127 14:06:29.577024 4914 generic.go:334] "Generic (PLEG): container finished" podID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerID="c8077a6806c4f412de26de667031b029ede7bb85f8579b330ac54a9c2cf231a0" exitCode=0 Jan 27 14:06:29 crc kubenswrapper[4914]: I0127 14:06:29.577064 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3cfa4b-174e-4958-8bd6-560f1f990c68","Type":"ContainerDied","Data":"504e00eeef20e7d7416750d336fa826d2664da22457b5b2f851e6f1d71be361b"} Jan 27 14:06:29 crc kubenswrapper[4914]: I0127 14:06:29.577091 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3cfa4b-174e-4958-8bd6-560f1f990c68","Type":"ContainerDied","Data":"835033792ffcb529a295ad0bf0c31ed4f5870992c6ebe003f06c1532e98ee033"} Jan 27 14:06:29 crc kubenswrapper[4914]: I0127 14:06:29.577101 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3cfa4b-174e-4958-8bd6-560f1f990c68","Type":"ContainerDied","Data":"c8077a6806c4f412de26de667031b029ede7bb85f8579b330ac54a9c2cf231a0"} Jan 27 14:06:29 crc kubenswrapper[4914]: I0127 14:06:29.581012 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6669b7ffb9-n8php" event={"ID":"1b9b723f-e648-4f12-86f7-d453e000a46e","Type":"ContainerStarted","Data":"dcc2dbd226e7a21b547e51ecc676243df33d0c7add96c55e398023c5b76a443d"} Jan 27 14:06:29 crc kubenswrapper[4914]: I0127 14:06:29.581061 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6669b7ffb9-n8php" event={"ID":"1b9b723f-e648-4f12-86f7-d453e000a46e","Type":"ContainerStarted","Data":"7e95ab32bf7047843b7db1f325f2969767b1333282cd48768aa5836708322d3e"} Jan 27 14:06:29 crc kubenswrapper[4914]: I0127 14:06:29.581125 4914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:06:29 crc kubenswrapper[4914]: I0127 14:06:29.581133 4914 prober_manager.go:312] "Failed to 
trigger a manual run" probe="Readiness"
Jan 27 14:06:29 crc kubenswrapper[4914]: I0127 14:06:29.581315 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6669b7ffb9-n8php"
Jan 27 14:06:29 crc kubenswrapper[4914]: I0127 14:06:29.613216 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6669b7ffb9-n8php" podStartSLOduration=3.61319786 podStartE2EDuration="3.61319786s" podCreationTimestamp="2026-01-27 14:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:29.604191794 +0000 UTC m=+1347.916541879" watchObservedRunningTime="2026-01-27 14:06:29.61319786 +0000 UTC m=+1347.925547945"
Jan 27 14:06:30 crc kubenswrapper[4914]: I0127 14:06:30.480764 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 27 14:06:30 crc kubenswrapper[4914]: I0127 14:06:30.503678 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 27 14:06:30 crc kubenswrapper[4914]: I0127 14:06:30.597293 4914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 14:06:30 crc kubenswrapper[4914]: I0127 14:06:30.597322 4914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 14:06:31 crc kubenswrapper[4914]: I0127 14:06:31.339046 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 27 14:06:31 crc kubenswrapper[4914]: I0127 14:06:31.342653 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.173105 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6d88d8dfdb-6svwq"]
Jan 27 14:06:33 crc kubenswrapper[4914]: E0127 14:06:33.173624 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7a3f205-3ea1-491b-af09-8e2ad479e0a5" containerName="barbican-keystone-listener"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.173640 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a3f205-3ea1-491b-af09-8e2ad479e0a5" containerName="barbican-keystone-listener"
Jan 27 14:06:33 crc kubenswrapper[4914]: E0127 14:06:33.173672 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7a3f205-3ea1-491b-af09-8e2ad479e0a5" containerName="barbican-keystone-listener-log"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.173681 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a3f205-3ea1-491b-af09-8e2ad479e0a5" containerName="barbican-keystone-listener-log"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.173909 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7a3f205-3ea1-491b-af09-8e2ad479e0a5" containerName="barbican-keystone-listener-log"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.173932 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7a3f205-3ea1-491b-af09-8e2ad479e0a5" containerName="barbican-keystone-listener"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.175137 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.189660 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d88d8dfdb-6svwq"]
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.247059 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-internal-tls-certs\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.247129 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-scripts\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.247157 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-public-tls-certs\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.247189 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-config-data\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.247221 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz92c\" (UniqueName: \"kubernetes.io/projected/56fee704-d63c-4264-a135-38cb14dca70f-kube-api-access-mz92c\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.247263 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-combined-ca-bundle\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.247305 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56fee704-d63c-4264-a135-38cb14dca70f-logs\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.349332 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56fee704-d63c-4264-a135-38cb14dca70f-logs\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.349521 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-internal-tls-certs\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.349573 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-scripts\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.349598 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-public-tls-certs\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.349630 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-config-data\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.349663 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz92c\" (UniqueName: \"kubernetes.io/projected/56fee704-d63c-4264-a135-38cb14dca70f-kube-api-access-mz92c\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.349710 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-combined-ca-bundle\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.349952 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56fee704-d63c-4264-a135-38cb14dca70f-logs\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.358029 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-internal-tls-certs\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.358252 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-public-tls-certs\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.359644 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-scripts\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.360288 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-config-data\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.378709 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fee704-d63c-4264-a135-38cb14dca70f-combined-ca-bundle\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.381608 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz92c\" (UniqueName: \"kubernetes.io/projected/56fee704-d63c-4264-a135-38cb14dca70f-kube-api-access-mz92c\") pod \"placement-6d88d8dfdb-6svwq\" (UID: \"56fee704-d63c-4264-a135-38cb14dca70f\") " pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.527029 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.638880 4914 generic.go:334] "Generic (PLEG): container finished" podID="87cff560-6d78-4257-b80f-16e6172fc629" containerID="3f984ff47bc02ee131274ec13f98b917eed6d7c0984b5f4ad9ed23f20d0a300b" exitCode=0
Jan 27 14:06:33 crc kubenswrapper[4914]: I0127 14:06:33.638925 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b88564dfc-pk2d6" event={"ID":"87cff560-6d78-4257-b80f-16e6172fc629","Type":"ContainerDied","Data":"3f984ff47bc02ee131274ec13f98b917eed6d7c0984b5f4ad9ed23f20d0a300b"}
Jan 27 14:06:36 crc kubenswrapper[4914]: I0127 14:06:36.474449 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:36 crc kubenswrapper[4914]: I0127 14:06:36.538771 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c9f4b5684-nv57j"
Jan 27 14:06:36 crc kubenswrapper[4914]: I0127 14:06:36.612757 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-845f6ddb76-569qx"]
Jan 27 14:06:36 crc kubenswrapper[4914]: I0127 14:06:36.612993 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-845f6ddb76-569qx" podUID="20138273-cae3-4cc1-960f-f861eca72126" containerName="barbican-api-log" containerID="cri-o://d66e8fa0cb9b6228361d9ce732e443af3fc91ee40165c73befc436dc06b20719" gracePeriod=30
Jan 27 14:06:36 crc kubenswrapper[4914]: I0127 14:06:36.613423 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-845f6ddb76-569qx" podUID="20138273-cae3-4cc1-960f-f861eca72126" containerName="barbican-api" containerID="cri-o://4f8600ee4babbcb896697bc14567b430422304335b270b1b132a10c2333c2e66" gracePeriod=30
Jan 27 14:06:37 crc kubenswrapper[4914]: I0127 14:06:37.681236 4914 generic.go:334] "Generic (PLEG): container finished" podID="20138273-cae3-4cc1-960f-f861eca72126" containerID="d66e8fa0cb9b6228361d9ce732e443af3fc91ee40165c73befc436dc06b20719" exitCode=143
Jan 27 14:06:37 crc kubenswrapper[4914]: I0127 14:06:37.681338 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-845f6ddb76-569qx" event={"ID":"20138273-cae3-4cc1-960f-f861eca72126","Type":"ContainerDied","Data":"d66e8fa0cb9b6228361d9ce732e443af3fc91ee40165c73befc436dc06b20719"}
Jan 27 14:06:37 crc kubenswrapper[4914]: I0127 14:06:37.690655 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 14:06:37 crc kubenswrapper[4914]: I0127 14:06:37.690718 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 14:06:37 crc kubenswrapper[4914]: I0127 14:06:37.690767 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz"
Jan 27 14:06:37 crc kubenswrapper[4914]: I0127 14:06:37.691630 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7c21b1cd9cda80b642f46a096fe84b98a11cc182c636f7e3bfaf4ae3f160417"} pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 14:06:37 crc kubenswrapper[4914]: I0127 14:06:37.691698 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" containerID="cri-o://d7c21b1cd9cda80b642f46a096fe84b98a11cc182c636f7e3bfaf4ae3f160417" gracePeriod=600
Jan 27 14:06:38 crc kubenswrapper[4914]: I0127 14:06:38.700581 4914 generic.go:334] "Generic (PLEG): container finished" podID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerID="d7c21b1cd9cda80b642f46a096fe84b98a11cc182c636f7e3bfaf4ae3f160417" exitCode=0
Jan 27 14:06:38 crc kubenswrapper[4914]: I0127 14:06:38.700634 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerDied","Data":"d7c21b1cd9cda80b642f46a096fe84b98a11cc182c636f7e3bfaf4ae3f160417"}
Jan 27 14:06:38 crc kubenswrapper[4914]: I0127 14:06:38.700673 4914 scope.go:117] "RemoveContainer" containerID="1eaab6549a5f3b2138f4d755eedcddc2ad9f911aba3a749ef9e6dd2fe3f38be3"
Jan 27 14:06:40 crc kubenswrapper[4914]: I0127 14:06:40.273302 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-845f6ddb76-569qx" podUID="20138273-cae3-4cc1-960f-f861eca72126" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.163:9311/healthcheck\": dial tcp 10.217.0.163:9311: connect: connection refused"
Jan 27 14:06:40 crc kubenswrapper[4914]: I0127 14:06:40.273350 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-845f6ddb76-569qx" podUID="20138273-cae3-4cc1-960f-f861eca72126" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.163:9311/healthcheck\": dial tcp 10.217.0.163:9311: connect: connection refused"
Jan 27 14:06:40 crc kubenswrapper[4914]: I0127 14:06:40.748693 4914 generic.go:334] "Generic (PLEG): container finished" podID="20138273-cae3-4cc1-960f-f861eca72126" containerID="4f8600ee4babbcb896697bc14567b430422304335b270b1b132a10c2333c2e66" exitCode=0
Jan 27 14:06:40 crc kubenswrapper[4914]: I0127 14:06:40.748740 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-845f6ddb76-569qx" event={"ID":"20138273-cae3-4cc1-960f-f861eca72126","Type":"ContainerDied","Data":"4f8600ee4babbcb896697bc14567b430422304335b270b1b132a10c2333c2e66"}
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.720293 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5fb5dcf6b9-q5gfl"]
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.722192 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.738116 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5fb5dcf6b9-q5gfl"]
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.842312 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-public-tls-certs\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.842374 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-log-httpd\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.842403 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-config-data\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.842692 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-etc-swift\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.842822 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgv79\" (UniqueName: \"kubernetes.io/projected/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-kube-api-access-lgv79\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.842935 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-run-httpd\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.843021 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-internal-tls-certs\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.843235 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-combined-ca-bundle\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.944901 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-etc-swift\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.944986 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgv79\" (UniqueName: \"kubernetes.io/projected/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-kube-api-access-lgv79\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.945004 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-run-httpd\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.945027 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-internal-tls-certs\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.945078 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-combined-ca-bundle\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.945120 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-public-tls-certs\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.945154 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-log-httpd\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.945176 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-config-data\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.945635 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-run-httpd\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.946544 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-log-httpd\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.951220 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-public-tls-certs\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.951845 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-config-data\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.951855 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-combined-ca-bundle\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.951908 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-etc-swift\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.953344 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-internal-tls-certs\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:41 crc kubenswrapper[4914]: I0127 14:06:41.965721 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgv79\" (UniqueName: \"kubernetes.io/projected/7a7d0c59-fb20-4508-bfe5-5e91e2f28394-kube-api-access-lgv79\") pod \"swift-proxy-5fb5dcf6b9-q5gfl\" (UID: \"7a7d0c59-fb20-4508-bfe5-5e91e2f28394\") " pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:42 crc kubenswrapper[4914]: I0127 14:06:42.051501 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl"
Jan 27 14:06:42 crc kubenswrapper[4914]: I0127 14:06:42.809213 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b88564dfc-pk2d6" event={"ID":"87cff560-6d78-4257-b80f-16e6172fc629","Type":"ContainerDied","Data":"0a8b590f184799ed897617013f9ff7e96efc2a0b5efefbc376a1e6a2cc257cc6"}
Jan 27 14:06:42 crc kubenswrapper[4914]: I0127 14:06:42.809470 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a8b590f184799ed897617013f9ff7e96efc2a0b5efefbc376a1e6a2cc257cc6"
Jan 27 14:06:42 crc kubenswrapper[4914]: E0127 14:06:42.827082 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be"
Jan 27 14:06:42 crc kubenswrapper[4914]: E0127 14:06:42.827304 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65qxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-xbz4b_openstack(58d81784-ad81-47ce-befb-d2ec09617b1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 14:06:42 crc kubenswrapper[4914]: E0127 14:06:42.828397 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-xbz4b" podUID="58d81784-ad81-47ce-befb-d2ec09617b1c"
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.083154 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b88564dfc-pk2d6"
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.169393 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrssb\" (UniqueName: \"kubernetes.io/projected/87cff560-6d78-4257-b80f-16e6172fc629-kube-api-access-mrssb\") pod \"87cff560-6d78-4257-b80f-16e6172fc629\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") "
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.171487 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-config-data-custom\") pod \"87cff560-6d78-4257-b80f-16e6172fc629\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") "
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.171738 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-config-data\") pod \"87cff560-6d78-4257-b80f-16e6172fc629\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") "
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.171994 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87cff560-6d78-4257-b80f-16e6172fc629-logs\") pod \"87cff560-6d78-4257-b80f-16e6172fc629\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") "
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.172226 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-combined-ca-bundle\") pod \"87cff560-6d78-4257-b80f-16e6172fc629\" (UID: \"87cff560-6d78-4257-b80f-16e6172fc629\") "
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.172890 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87cff560-6d78-4257-b80f-16e6172fc629-logs" (OuterVolumeSpecName: "logs") pod "87cff560-6d78-4257-b80f-16e6172fc629" (UID: "87cff560-6d78-4257-b80f-16e6172fc629"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.173536 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87cff560-6d78-4257-b80f-16e6172fc629-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.191564 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "87cff560-6d78-4257-b80f-16e6172fc629" (UID: "87cff560-6d78-4257-b80f-16e6172fc629"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.198227 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cff560-6d78-4257-b80f-16e6172fc629-kube-api-access-mrssb" (OuterVolumeSpecName: "kube-api-access-mrssb") pod "87cff560-6d78-4257-b80f-16e6172fc629" (UID: "87cff560-6d78-4257-b80f-16e6172fc629"). InnerVolumeSpecName "kube-api-access-mrssb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.215990 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87cff560-6d78-4257-b80f-16e6172fc629" (UID: "87cff560-6d78-4257-b80f-16e6172fc629"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.276878 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.276913 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrssb\" (UniqueName: \"kubernetes.io/projected/87cff560-6d78-4257-b80f-16e6172fc629-kube-api-access-mrssb\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.276927 4914 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.277548 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-config-data" (OuterVolumeSpecName: "config-data") pod "87cff560-6d78-4257-b80f-16e6172fc629" (UID: "87cff560-6d78-4257-b80f-16e6172fc629"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.352580 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-845f6ddb76-569qx"
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.378306 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87cff560-6d78-4257-b80f-16e6172fc629-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:43 crc kubenswrapper[4914]: W0127 14:06:43.412753 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56fee704_d63c_4264_a135_38cb14dca70f.slice/crio-483a1b4d2769243d2b6ccb36e2975f1157e7e58f656649f323e033e5a145edf8 WatchSource:0}: Error finding container 483a1b4d2769243d2b6ccb36e2975f1157e7e58f656649f323e033e5a145edf8: Status 404 returned error can't find the container with id 483a1b4d2769243d2b6ccb36e2975f1157e7e58f656649f323e033e5a145edf8
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.415886 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d88d8dfdb-6svwq"]
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.480062 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxsv7\" (UniqueName: \"kubernetes.io/projected/20138273-cae3-4cc1-960f-f861eca72126-kube-api-access-sxsv7\") pod \"20138273-cae3-4cc1-960f-f861eca72126\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") "
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.480122 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-config-data\") pod \"20138273-cae3-4cc1-960f-f861eca72126\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") "
Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.480224 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-combined-ca-bundle\") pod \"20138273-cae3-4cc1-960f-f861eca72126\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.480266 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-internal-tls-certs\") pod \"20138273-cae3-4cc1-960f-f861eca72126\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.480409 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20138273-cae3-4cc1-960f-f861eca72126-logs\") pod \"20138273-cae3-4cc1-960f-f861eca72126\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.480463 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-config-data-custom\") pod \"20138273-cae3-4cc1-960f-f861eca72126\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.480539 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-public-tls-certs\") pod \"20138273-cae3-4cc1-960f-f861eca72126\" (UID: \"20138273-cae3-4cc1-960f-f861eca72126\") " Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.485891 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20138273-cae3-4cc1-960f-f861eca72126-logs" (OuterVolumeSpecName: "logs") pod "20138273-cae3-4cc1-960f-f861eca72126" (UID: "20138273-cae3-4cc1-960f-f861eca72126"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.487330 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20138273-cae3-4cc1-960f-f861eca72126-kube-api-access-sxsv7" (OuterVolumeSpecName: "kube-api-access-sxsv7") pod "20138273-cae3-4cc1-960f-f861eca72126" (UID: "20138273-cae3-4cc1-960f-f861eca72126"). InnerVolumeSpecName "kube-api-access-sxsv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.487463 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "20138273-cae3-4cc1-960f-f861eca72126" (UID: "20138273-cae3-4cc1-960f-f861eca72126"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.570333 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20138273-cae3-4cc1-960f-f861eca72126" (UID: "20138273-cae3-4cc1-960f-f861eca72126"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.583561 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20138273-cae3-4cc1-960f-f861eca72126-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.583606 4914 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.583622 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxsv7\" (UniqueName: \"kubernetes.io/projected/20138273-cae3-4cc1-960f-f861eca72126-kube-api-access-sxsv7\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:43 crc kubenswrapper[4914]: I0127 14:06:43.583631 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.285030 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "20138273-cae3-4cc1-960f-f861eca72126" (UID: "20138273-cae3-4cc1-960f-f861eca72126"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.298505 4914 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.301622 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-845f6ddb76-569qx" Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.328327 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "20138273-cae3-4cc1-960f-f861eca72126" (UID: "20138273-cae3-4cc1-960f-f861eca72126"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.347964 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b88564dfc-pk2d6" Jan 27 14:06:44 crc kubenswrapper[4914]: E0127 14:06:44.388941 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-xbz4b" podUID="58d81784-ad81-47ce-befb-d2ec09617b1c" Jan 27 14:06:44 crc kubenswrapper[4914]: W0127 14:06:44.389067 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a7d0c59_fb20_4508_bfe5_5e91e2f28394.slice/crio-5206f52f20cc0d0730f0a5133391d90d1c89c73a808d04ae5440651cb55da597 WatchSource:0}: Error finding container 5206f52f20cc0d0730f0a5133391d90d1c89c73a808d04ae5440651cb55da597: Status 404 returned error can't find the container with id 5206f52f20cc0d0730f0a5133391d90d1c89c73a808d04ae5440651cb55da597 Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.400525 4914 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 
27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.429632 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-config-data" (OuterVolumeSpecName: "config-data") pod "20138273-cae3-4cc1-960f-f861eca72126" (UID: "20138273-cae3-4cc1-960f-f861eca72126"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.502394 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20138273-cae3-4cc1-960f-f861eca72126-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.514751 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d88d8dfdb-6svwq" event={"ID":"56fee704-d63c-4264-a135-38cb14dca70f","Type":"ContainerStarted","Data":"483a1b4d2769243d2b6ccb36e2975f1157e7e58f656649f323e033e5a145edf8"} Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.514816 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-845f6ddb76-569qx" event={"ID":"20138273-cae3-4cc1-960f-f861eca72126","Type":"ContainerDied","Data":"e7cd4b3d595b109430e24a80be64d22e285ee5a5b35702e49207ebb1ae2ee3bf"} Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.514872 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771"} Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.514891 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5fb5dcf6b9-q5gfl"] Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.514961 4914 scope.go:117] "RemoveContainer" containerID="4f8600ee4babbcb896697bc14567b430422304335b270b1b132a10c2333c2e66" Jan 27 
14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.550194 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5b88564dfc-pk2d6"] Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.563343 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5b88564dfc-pk2d6"] Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.578543 4914 scope.go:117] "RemoveContainer" containerID="d66e8fa0cb9b6228361d9ce732e443af3fc91ee40165c73befc436dc06b20719" Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.735551 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-845f6ddb76-569qx"] Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.745703 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-845f6ddb76-569qx"] Jan 27 14:06:44 crc kubenswrapper[4914]: I0127 14:06:44.971612 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.129754 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3cfa4b-174e-4958-8bd6-560f1f990c68-log-httpd\") pod \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.130084 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-sg-core-conf-yaml\") pod \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.130138 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-combined-ca-bundle\") pod 
\"aa3cfa4b-174e-4958-8bd6-560f1f990c68\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.130209 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-config-data\") pod \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.130241 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3cfa4b-174e-4958-8bd6-560f1f990c68-run-httpd\") pod \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.130329 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-scripts\") pod \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.130361 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcd5b\" (UniqueName: \"kubernetes.io/projected/aa3cfa4b-174e-4958-8bd6-560f1f990c68-kube-api-access-zcd5b\") pod \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\" (UID: \"aa3cfa4b-174e-4958-8bd6-560f1f990c68\") " Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.130543 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa3cfa4b-174e-4958-8bd6-560f1f990c68-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa3cfa4b-174e-4958-8bd6-560f1f990c68" (UID: "aa3cfa4b-174e-4958-8bd6-560f1f990c68"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.130566 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa3cfa4b-174e-4958-8bd6-560f1f990c68-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa3cfa4b-174e-4958-8bd6-560f1f990c68" (UID: "aa3cfa4b-174e-4958-8bd6-560f1f990c68"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.131217 4914 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3cfa4b-174e-4958-8bd6-560f1f990c68-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.131237 4914 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3cfa4b-174e-4958-8bd6-560f1f990c68-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.136103 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-scripts" (OuterVolumeSpecName: "scripts") pod "aa3cfa4b-174e-4958-8bd6-560f1f990c68" (UID: "aa3cfa4b-174e-4958-8bd6-560f1f990c68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.137148 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3cfa4b-174e-4958-8bd6-560f1f990c68-kube-api-access-zcd5b" (OuterVolumeSpecName: "kube-api-access-zcd5b") pod "aa3cfa4b-174e-4958-8bd6-560f1f990c68" (UID: "aa3cfa4b-174e-4958-8bd6-560f1f990c68"). InnerVolumeSpecName "kube-api-access-zcd5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.162474 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa3cfa4b-174e-4958-8bd6-560f1f990c68" (UID: "aa3cfa4b-174e-4958-8bd6-560f1f990c68"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.230302 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa3cfa4b-174e-4958-8bd6-560f1f990c68" (UID: "aa3cfa4b-174e-4958-8bd6-560f1f990c68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.233304 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.233338 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcd5b\" (UniqueName: \"kubernetes.io/projected/aa3cfa4b-174e-4958-8bd6-560f1f990c68-kube-api-access-zcd5b\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.233355 4914 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.233366 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.239330 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-config-data" (OuterVolumeSpecName: "config-data") pod "aa3cfa4b-174e-4958-8bd6-560f1f990c68" (UID: "aa3cfa4b-174e-4958-8bd6-560f1f990c68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.336289 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa3cfa4b-174e-4958-8bd6-560f1f990c68-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.360576 4914 generic.go:334] "Generic (PLEG): container finished" podID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerID="d4c127142b1c3dcc24ccbae5c830289723bb4091bb8bc9ad3c687d551e2957cc" exitCode=0 Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.360671 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3cfa4b-174e-4958-8bd6-560f1f990c68","Type":"ContainerDied","Data":"d4c127142b1c3dcc24ccbae5c830289723bb4091bb8bc9ad3c687d551e2957cc"} Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.360702 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3cfa4b-174e-4958-8bd6-560f1f990c68","Type":"ContainerDied","Data":"d4dd9615570f7ee70142aa7d7ddc40ddd42cffab8c1bd372495b682cfd2d0241"} Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.360720 4914 scope.go:117] "RemoveContainer" containerID="504e00eeef20e7d7416750d336fa826d2664da22457b5b2f851e6f1d71be361b" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.360823 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.363800 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d88d8dfdb-6svwq" event={"ID":"56fee704-d63c-4264-a135-38cb14dca70f","Type":"ContainerStarted","Data":"ecfb85d0761b28a349c3479f9f8a5bbc53e1722f03f052416fa9da4543eeb948"} Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.363885 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d88d8dfdb-6svwq" event={"ID":"56fee704-d63c-4264-a135-38cb14dca70f","Type":"ContainerStarted","Data":"d0e47257df232d020866fbcd137bbb0e5285c06daef72351487b3f08a4ccac4f"} Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.365377 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d88d8dfdb-6svwq" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.365465 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d88d8dfdb-6svwq" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.377920 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl" event={"ID":"7a7d0c59-fb20-4508-bfe5-5e91e2f28394","Type":"ContainerStarted","Data":"f87ab75e598ca4305f19483e5e24df9bf6abe883c1271bca2ad0f20500c4d747"} Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.377995 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl" event={"ID":"7a7d0c59-fb20-4508-bfe5-5e91e2f28394","Type":"ContainerStarted","Data":"28d046a84f1459eca724facd1f5a3a397670be0867e479aed671b0d8f615021e"} Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.378006 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl" event={"ID":"7a7d0c59-fb20-4508-bfe5-5e91e2f28394","Type":"ContainerStarted","Data":"5206f52f20cc0d0730f0a5133391d90d1c89c73a808d04ae5440651cb55da597"} Jan 27 14:06:45 crc 
kubenswrapper[4914]: I0127 14:06:45.380198 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.380422 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.383632 4914 scope.go:117] "RemoveContainer" containerID="835033792ffcb529a295ad0bf0c31ed4f5870992c6ebe003f06c1532e98ee033" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.401975 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6d88d8dfdb-6svwq" podStartSLOduration=12.401948852 podStartE2EDuration="12.401948852s" podCreationTimestamp="2026-01-27 14:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:45.395210808 +0000 UTC m=+1363.707560893" watchObservedRunningTime="2026-01-27 14:06:45.401948852 +0000 UTC m=+1363.714298937" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.422467 4914 scope.go:117] "RemoveContainer" containerID="c8077a6806c4f412de26de667031b029ede7bb85f8579b330ac54a9c2cf231a0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.432326 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.457469 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.460320 4914 scope.go:117] "RemoveContainer" containerID="d4c127142b1c3dcc24ccbae5c830289723bb4091bb8bc9ad3c687d551e2957cc" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.491618 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:45 crc kubenswrapper[4914]: E0127 14:06:45.492302 4914 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="20138273-cae3-4cc1-960f-f861eca72126" containerName="barbican-api-log" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492328 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="20138273-cae3-4cc1-960f-f861eca72126" containerName="barbican-api-log" Jan 27 14:06:45 crc kubenswrapper[4914]: E0127 14:06:45.492345 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="ceilometer-central-agent" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492352 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="ceilometer-central-agent" Jan 27 14:06:45 crc kubenswrapper[4914]: E0127 14:06:45.492367 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="sg-core" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492373 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="sg-core" Jan 27 14:06:45 crc kubenswrapper[4914]: E0127 14:06:45.492383 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87cff560-6d78-4257-b80f-16e6172fc629" containerName="barbican-worker-log" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492390 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="87cff560-6d78-4257-b80f-16e6172fc629" containerName="barbican-worker-log" Jan 27 14:06:45 crc kubenswrapper[4914]: E0127 14:06:45.492403 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87cff560-6d78-4257-b80f-16e6172fc629" containerName="barbican-worker" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492409 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="87cff560-6d78-4257-b80f-16e6172fc629" containerName="barbican-worker" Jan 27 14:06:45 crc kubenswrapper[4914]: E0127 14:06:45.492421 4914 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="20138273-cae3-4cc1-960f-f861eca72126" containerName="barbican-api" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492427 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="20138273-cae3-4cc1-960f-f861eca72126" containerName="barbican-api" Jan 27 14:06:45 crc kubenswrapper[4914]: E0127 14:06:45.492437 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="ceilometer-notification-agent" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492443 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="ceilometer-notification-agent" Jan 27 14:06:45 crc kubenswrapper[4914]: E0127 14:06:45.492453 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="proxy-httpd" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492460 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="proxy-httpd" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492620 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="87cff560-6d78-4257-b80f-16e6172fc629" containerName="barbican-worker-log" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492677 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="87cff560-6d78-4257-b80f-16e6172fc629" containerName="barbican-worker" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492686 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="20138273-cae3-4cc1-960f-f861eca72126" containerName="barbican-api" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492698 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="proxy-httpd" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492707 4914 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="20138273-cae3-4cc1-960f-f861eca72126" containerName="barbican-api-log" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492723 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="ceilometer-notification-agent" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492733 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="ceilometer-central-agent" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.492740 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" containerName="sg-core" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.495031 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.498315 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.498616 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.504651 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl" podStartSLOduration=4.504632791 podStartE2EDuration="4.504632791s" podCreationTimestamp="2026-01-27 14:06:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:45.460349845 +0000 UTC m=+1363.772699950" watchObservedRunningTime="2026-01-27 14:06:45.504632791 +0000 UTC m=+1363.816982876" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.506340 4914 scope.go:117] "RemoveContainer" containerID="504e00eeef20e7d7416750d336fa826d2664da22457b5b2f851e6f1d71be361b" Jan 27 14:06:45 
crc kubenswrapper[4914]: E0127 14:06:45.507106 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"504e00eeef20e7d7416750d336fa826d2664da22457b5b2f851e6f1d71be361b\": container with ID starting with 504e00eeef20e7d7416750d336fa826d2664da22457b5b2f851e6f1d71be361b not found: ID does not exist" containerID="504e00eeef20e7d7416750d336fa826d2664da22457b5b2f851e6f1d71be361b" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.507155 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504e00eeef20e7d7416750d336fa826d2664da22457b5b2f851e6f1d71be361b"} err="failed to get container status \"504e00eeef20e7d7416750d336fa826d2664da22457b5b2f851e6f1d71be361b\": rpc error: code = NotFound desc = could not find container \"504e00eeef20e7d7416750d336fa826d2664da22457b5b2f851e6f1d71be361b\": container with ID starting with 504e00eeef20e7d7416750d336fa826d2664da22457b5b2f851e6f1d71be361b not found: ID does not exist" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.507185 4914 scope.go:117] "RemoveContainer" containerID="835033792ffcb529a295ad0bf0c31ed4f5870992c6ebe003f06c1532e98ee033" Jan 27 14:06:45 crc kubenswrapper[4914]: E0127 14:06:45.510129 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"835033792ffcb529a295ad0bf0c31ed4f5870992c6ebe003f06c1532e98ee033\": container with ID starting with 835033792ffcb529a295ad0bf0c31ed4f5870992c6ebe003f06c1532e98ee033 not found: ID does not exist" containerID="835033792ffcb529a295ad0bf0c31ed4f5870992c6ebe003f06c1532e98ee033" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.510159 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"835033792ffcb529a295ad0bf0c31ed4f5870992c6ebe003f06c1532e98ee033"} err="failed to get container status 
\"835033792ffcb529a295ad0bf0c31ed4f5870992c6ebe003f06c1532e98ee033\": rpc error: code = NotFound desc = could not find container \"835033792ffcb529a295ad0bf0c31ed4f5870992c6ebe003f06c1532e98ee033\": container with ID starting with 835033792ffcb529a295ad0bf0c31ed4f5870992c6ebe003f06c1532e98ee033 not found: ID does not exist" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.510176 4914 scope.go:117] "RemoveContainer" containerID="c8077a6806c4f412de26de667031b029ede7bb85f8579b330ac54a9c2cf231a0" Jan 27 14:06:45 crc kubenswrapper[4914]: E0127 14:06:45.510433 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8077a6806c4f412de26de667031b029ede7bb85f8579b330ac54a9c2cf231a0\": container with ID starting with c8077a6806c4f412de26de667031b029ede7bb85f8579b330ac54a9c2cf231a0 not found: ID does not exist" containerID="c8077a6806c4f412de26de667031b029ede7bb85f8579b330ac54a9c2cf231a0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.510459 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8077a6806c4f412de26de667031b029ede7bb85f8579b330ac54a9c2cf231a0"} err="failed to get container status \"c8077a6806c4f412de26de667031b029ede7bb85f8579b330ac54a9c2cf231a0\": rpc error: code = NotFound desc = could not find container \"c8077a6806c4f412de26de667031b029ede7bb85f8579b330ac54a9c2cf231a0\": container with ID starting with c8077a6806c4f412de26de667031b029ede7bb85f8579b330ac54a9c2cf231a0 not found: ID does not exist" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.510479 4914 scope.go:117] "RemoveContainer" containerID="d4c127142b1c3dcc24ccbae5c830289723bb4091bb8bc9ad3c687d551e2957cc" Jan 27 14:06:45 crc kubenswrapper[4914]: E0127 14:06:45.510692 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d4c127142b1c3dcc24ccbae5c830289723bb4091bb8bc9ad3c687d551e2957cc\": container with ID starting with d4c127142b1c3dcc24ccbae5c830289723bb4091bb8bc9ad3c687d551e2957cc not found: ID does not exist" containerID="d4c127142b1c3dcc24ccbae5c830289723bb4091bb8bc9ad3c687d551e2957cc" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.510718 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4c127142b1c3dcc24ccbae5c830289723bb4091bb8bc9ad3c687d551e2957cc"} err="failed to get container status \"d4c127142b1c3dcc24ccbae5c830289723bb4091bb8bc9ad3c687d551e2957cc\": rpc error: code = NotFound desc = could not find container \"d4c127142b1c3dcc24ccbae5c830289723bb4091bb8bc9ad3c687d551e2957cc\": container with ID starting with d4c127142b1c3dcc24ccbae5c830289723bb4091bb8bc9ad3c687d551e2957cc not found: ID does not exist" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.516898 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.642746 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-scripts\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.643124 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16d38f9-5614-4637-a98f-9c47190ccff4-log-httpd\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.643158 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.643211 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g2rz\" (UniqueName: \"kubernetes.io/projected/b16d38f9-5614-4637-a98f-9c47190ccff4-kube-api-access-9g2rz\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.643241 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.643419 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-config-data\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.643703 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16d38f9-5614-4637-a98f-9c47190ccff4-run-httpd\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.745282 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16d38f9-5614-4637-a98f-9c47190ccff4-run-httpd\") pod \"ceilometer-0\" (UID: 
\"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.745447 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-scripts\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.745495 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16d38f9-5614-4637-a98f-9c47190ccff4-log-httpd\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.745518 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.745566 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g2rz\" (UniqueName: \"kubernetes.io/projected/b16d38f9-5614-4637-a98f-9c47190ccff4-kube-api-access-9g2rz\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.745591 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.745619 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-config-data\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.747855 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16d38f9-5614-4637-a98f-9c47190ccff4-log-httpd\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.748332 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16d38f9-5614-4637-a98f-9c47190ccff4-run-httpd\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.755077 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-scripts\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.755989 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.760081 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.760813 4914 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-config-data\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.780458 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g2rz\" (UniqueName: \"kubernetes.io/projected/b16d38f9-5614-4637-a98f-9c47190ccff4-kube-api-access-9g2rz\") pod \"ceilometer-0\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " pod="openstack/ceilometer-0" Jan 27 14:06:45 crc kubenswrapper[4914]: I0127 14:06:45.826446 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:06:46 crc kubenswrapper[4914]: I0127 14:06:46.322553 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20138273-cae3-4cc1-960f-f861eca72126" path="/var/lib/kubelet/pods/20138273-cae3-4cc1-960f-f861eca72126/volumes" Jan 27 14:06:46 crc kubenswrapper[4914]: I0127 14:06:46.324108 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cff560-6d78-4257-b80f-16e6172fc629" path="/var/lib/kubelet/pods/87cff560-6d78-4257-b80f-16e6172fc629/volumes" Jan 27 14:06:46 crc kubenswrapper[4914]: I0127 14:06:46.324914 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa3cfa4b-174e-4958-8bd6-560f1f990c68" path="/var/lib/kubelet/pods/aa3cfa4b-174e-4958-8bd6-560f1f990c68/volumes" Jan 27 14:06:46 crc kubenswrapper[4914]: I0127 14:06:46.332146 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:06:46 crc kubenswrapper[4914]: I0127 14:06:46.392429 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16d38f9-5614-4637-a98f-9c47190ccff4","Type":"ContainerStarted","Data":"7414e6be5338e286698b0fdf1254bb4bed0a5f7d349a0a1bf7197fd0862001ab"} 
Jan 27 14:06:48 crc kubenswrapper[4914]: I0127 14:06:48.412248 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16d38f9-5614-4637-a98f-9c47190ccff4","Type":"ContainerStarted","Data":"8671d160ed2b7d759efd880eab6e024b910a3314b90142ccf06949d0ac922e63"} Jan 27 14:06:50 crc kubenswrapper[4914]: I0127 14:06:50.445329 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16d38f9-5614-4637-a98f-9c47190ccff4","Type":"ContainerStarted","Data":"2bde8d36ec54c063fb58db8839a0deb3f66be93a6fe1d2555cf4f5eba33e84b3"} Jan 27 14:06:51 crc kubenswrapper[4914]: I0127 14:06:51.459462 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16d38f9-5614-4637-a98f-9c47190ccff4","Type":"ContainerStarted","Data":"fa8842a92a6f55830ed93efc47ca2b83186c99f714cfbb125e4a239dc202dcef"} Jan 27 14:06:52 crc kubenswrapper[4914]: I0127 14:06:52.056851 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl" Jan 27 14:06:52 crc kubenswrapper[4914]: I0127 14:06:52.058745 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5fb5dcf6b9-q5gfl" Jan 27 14:06:52 crc kubenswrapper[4914]: I0127 14:06:52.154926 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6786c8f89-752mb"] Jan 27 14:06:52 crc kubenswrapper[4914]: I0127 14:06:52.155178 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6786c8f89-752mb" podUID="d4a1896c-aac3-4c71-8d04-e608cc34f5f6" containerName="proxy-httpd" containerID="cri-o://a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3" gracePeriod=30 Jan 27 14:06:52 crc kubenswrapper[4914]: I0127 14:06:52.155312 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6786c8f89-752mb" 
podUID="d4a1896c-aac3-4c71-8d04-e608cc34f5f6" containerName="proxy-server" containerID="cri-o://41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f" gracePeriod=30 Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.357073 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.486042 4914 generic.go:334] "Generic (PLEG): container finished" podID="d4a1896c-aac3-4c71-8d04-e608cc34f5f6" containerID="41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f" exitCode=0 Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.486080 4914 generic.go:334] "Generic (PLEG): container finished" podID="d4a1896c-aac3-4c71-8d04-e608cc34f5f6" containerID="a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3" exitCode=0 Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.486116 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6786c8f89-752mb" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.486155 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6786c8f89-752mb" event={"ID":"d4a1896c-aac3-4c71-8d04-e608cc34f5f6","Type":"ContainerDied","Data":"41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f"} Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.486218 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6786c8f89-752mb" event={"ID":"d4a1896c-aac3-4c71-8d04-e608cc34f5f6","Type":"ContainerDied","Data":"a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3"} Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.486231 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6786c8f89-752mb" event={"ID":"d4a1896c-aac3-4c71-8d04-e608cc34f5f6","Type":"ContainerDied","Data":"42171d79af5d2240c055183be627aebb29edf70f3ef54a21688fb3f838a934ce"} 
Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.486248 4914 scope.go:117] "RemoveContainer" containerID="41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.493607 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16d38f9-5614-4637-a98f-9c47190ccff4","Type":"ContainerStarted","Data":"4a517bb0cb845a939cf1f2f21c49f3e50ba43c0141c80c2ad66e583e11cad3ae"} Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.494072 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.494873 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh8dc\" (UniqueName: \"kubernetes.io/projected/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-kube-api-access-rh8dc\") pod \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.494929 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-config-data\") pod \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.494993 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-etc-swift\") pod \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.495022 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-combined-ca-bundle\") pod 
\"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.495106 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-public-tls-certs\") pod \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.495593 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-log-httpd\") pod \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.495687 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-run-httpd\") pod \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.495747 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-internal-tls-certs\") pod \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\" (UID: \"d4a1896c-aac3-4c71-8d04-e608cc34f5f6\") " Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.498123 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d4a1896c-aac3-4c71-8d04-e608cc34f5f6" (UID: "d4a1896c-aac3-4c71-8d04-e608cc34f5f6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.498143 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d4a1896c-aac3-4c71-8d04-e608cc34f5f6" (UID: "d4a1896c-aac3-4c71-8d04-e608cc34f5f6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.504514 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d4a1896c-aac3-4c71-8d04-e608cc34f5f6" (UID: "d4a1896c-aac3-4c71-8d04-e608cc34f5f6"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.504553 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-kube-api-access-rh8dc" (OuterVolumeSpecName: "kube-api-access-rh8dc") pod "d4a1896c-aac3-4c71-8d04-e608cc34f5f6" (UID: "d4a1896c-aac3-4c71-8d04-e608cc34f5f6"). InnerVolumeSpecName "kube-api-access-rh8dc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.530226 4914 scope.go:117] "RemoveContainer" containerID="a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.531550 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.028462839 podStartE2EDuration="8.53152074s" podCreationTimestamp="2026-01-27 14:06:45 +0000 UTC" firstStartedPulling="2026-01-27 14:06:46.358526164 +0000 UTC m=+1364.670876259" lastFinishedPulling="2026-01-27 14:06:52.861584075 +0000 UTC m=+1371.173934160" observedRunningTime="2026-01-27 14:06:53.516368625 +0000 UTC m=+1371.828718740" watchObservedRunningTime="2026-01-27 14:06:53.53152074 +0000 UTC m=+1371.843870855" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.568464 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d4a1896c-aac3-4c71-8d04-e608cc34f5f6" (UID: "d4a1896c-aac3-4c71-8d04-e608cc34f5f6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.583787 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d4a1896c-aac3-4c71-8d04-e608cc34f5f6" (UID: "d4a1896c-aac3-4c71-8d04-e608cc34f5f6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.592956 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-config-data" (OuterVolumeSpecName: "config-data") pod "d4a1896c-aac3-4c71-8d04-e608cc34f5f6" (UID: "d4a1896c-aac3-4c71-8d04-e608cc34f5f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.598065 4914 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.598095 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh8dc\" (UniqueName: \"kubernetes.io/projected/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-kube-api-access-rh8dc\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.598104 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.598112 4914 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.598121 4914 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.598129 4914 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.598138 4914 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.600725 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4a1896c-aac3-4c71-8d04-e608cc34f5f6" (UID: "d4a1896c-aac3-4c71-8d04-e608cc34f5f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.679177 4914 scope.go:117] "RemoveContainer" containerID="41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f" Jan 27 14:06:53 crc kubenswrapper[4914]: E0127 14:06:53.679998 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f\": container with ID starting with 41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f not found: ID does not exist" containerID="41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f" Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.680035 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f"} err="failed to get container status \"41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f\": rpc error: code = NotFound desc = could not find container \"41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f\": container with ID starting with 
41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f not found: ID does not exist"
Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.680065 4914 scope.go:117] "RemoveContainer" containerID="a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3"
Jan 27 14:06:53 crc kubenswrapper[4914]: E0127 14:06:53.680317 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3\": container with ID starting with a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3 not found: ID does not exist" containerID="a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3"
Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.680344 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3"} err="failed to get container status \"a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3\": rpc error: code = NotFound desc = could not find container \"a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3\": container with ID starting with a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3 not found: ID does not exist"
Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.680361 4914 scope.go:117] "RemoveContainer" containerID="41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f"
Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.680592 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f"} err="failed to get container status \"41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f\": rpc error: code = NotFound desc = could not find container \"41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f\": container with ID starting with 41f4b0433c1d898a4fd59b6e24689b3199e47cf8568cda1afbb3f06a5de4c16f not found: ID does not exist"
Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.680612 4914 scope.go:117] "RemoveContainer" containerID="a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3"
Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.680911 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3"} err="failed to get container status \"a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3\": rpc error: code = NotFound desc = could not find container \"a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3\": container with ID starting with a7acae61b4228a21ea3ec8047927eb098b8f1067e463b2bfb5adc337ad8ed8d3 not found: ID does not exist"
Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.700477 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a1896c-aac3-4c71-8d04-e608cc34f5f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.829184 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6786c8f89-752mb"]
Jan 27 14:06:53 crc kubenswrapper[4914]: I0127 14:06:53.838323 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-6786c8f89-752mb"]
Jan 27 14:06:54 crc kubenswrapper[4914]: I0127 14:06:54.307129 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4a1896c-aac3-4c71-8d04-e608cc34f5f6" path="/var/lib/kubelet/pods/d4a1896c-aac3-4c71-8d04-e608cc34f5f6/volumes"
Jan 27 14:06:55 crc kubenswrapper[4914]: I0127 14:06:55.512862 4914 generic.go:334] "Generic (PLEG): container finished" podID="d3ca6012-ae4a-45ab-8975-9de943d2f790" containerID="22b6a69449be50b1aa84bf1899eff41cf65dd64c71bb204fe9b91007b9a45c53" exitCode=137
Jan 27 14:06:55 crc kubenswrapper[4914]: I0127 14:06:55.513146 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7898f695d7-6lw8w" event={"ID":"d3ca6012-ae4a-45ab-8975-9de943d2f790","Type":"ContainerDied","Data":"22b6a69449be50b1aa84bf1899eff41cf65dd64c71bb204fe9b91007b9a45c53"}
Jan 27 14:06:55 crc kubenswrapper[4914]: I0127 14:06:55.515101 4914 generic.go:334] "Generic (PLEG): container finished" podID="416da471-329e-45f8-b786-e8841f575f20" containerID="bfd890dd9d32f4b7a29c9943572309e57caf863269314ecf44b18da8b8372e40" exitCode=137
Jan 27 14:06:55 crc kubenswrapper[4914]: I0127 14:06:55.515144 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" event={"ID":"416da471-329e-45f8-b786-e8841f575f20","Type":"ContainerDied","Data":"bfd890dd9d32f4b7a29c9943572309e57caf863269314ecf44b18da8b8372e40"}
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.423305 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7898f695d7-6lw8w"
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.430124 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq"
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.526306 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq" event={"ID":"416da471-329e-45f8-b786-e8841f575f20","Type":"ContainerDied","Data":"831bf0ea499789379def09205f1f3f215ce7753a3d2eded6f7c63b6aa9b98bdd"}
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.526330 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86dc4fcb7d-k54hq"
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.526384 4914 scope.go:117] "RemoveContainer" containerID="bfd890dd9d32f4b7a29c9943572309e57caf863269314ecf44b18da8b8372e40"
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.529517 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7898f695d7-6lw8w" event={"ID":"d3ca6012-ae4a-45ab-8975-9de943d2f790","Type":"ContainerDied","Data":"dc08cc567e42c5707091497a86548ebaa1b97c7e4091ddea781aa6d76b894d48"}
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.529588 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7898f695d7-6lw8w"
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.560728 4914 scope.go:117] "RemoveContainer" containerID="6abc0f205dd4351533405721c80ece7917533d6d8e9a925eb52fc58ac06a033e"
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.568992 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmbrv\" (UniqueName: \"kubernetes.io/projected/416da471-329e-45f8-b786-e8841f575f20-kube-api-access-fmbrv\") pod \"416da471-329e-45f8-b786-e8841f575f20\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") "
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.569049 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-config-data-custom\") pod \"d3ca6012-ae4a-45ab-8975-9de943d2f790\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") "
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.569099 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-combined-ca-bundle\") pod \"d3ca6012-ae4a-45ab-8975-9de943d2f790\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") "
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.569128 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-config-data\") pod \"d3ca6012-ae4a-45ab-8975-9de943d2f790\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") "
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.569171 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-config-data-custom\") pod \"416da471-329e-45f8-b786-e8841f575f20\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") "
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.569192 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-config-data\") pod \"416da471-329e-45f8-b786-e8841f575f20\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") "
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.569305 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-combined-ca-bundle\") pod \"416da471-329e-45f8-b786-e8841f575f20\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") "
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.569343 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd2hg\" (UniqueName: \"kubernetes.io/projected/d3ca6012-ae4a-45ab-8975-9de943d2f790-kube-api-access-fd2hg\") pod \"d3ca6012-ae4a-45ab-8975-9de943d2f790\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") "
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.569461 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416da471-329e-45f8-b786-e8841f575f20-logs\") pod \"416da471-329e-45f8-b786-e8841f575f20\" (UID: \"416da471-329e-45f8-b786-e8841f575f20\") "
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.569487 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3ca6012-ae4a-45ab-8975-9de943d2f790-logs\") pod \"d3ca6012-ae4a-45ab-8975-9de943d2f790\" (UID: \"d3ca6012-ae4a-45ab-8975-9de943d2f790\") "
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.570141 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3ca6012-ae4a-45ab-8975-9de943d2f790-logs" (OuterVolumeSpecName: "logs") pod "d3ca6012-ae4a-45ab-8975-9de943d2f790" (UID: "d3ca6012-ae4a-45ab-8975-9de943d2f790"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.570177 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/416da471-329e-45f8-b786-e8841f575f20-logs" (OuterVolumeSpecName: "logs") pod "416da471-329e-45f8-b786-e8841f575f20" (UID: "416da471-329e-45f8-b786-e8841f575f20"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.574926 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/416da471-329e-45f8-b786-e8841f575f20-kube-api-access-fmbrv" (OuterVolumeSpecName: "kube-api-access-fmbrv") pod "416da471-329e-45f8-b786-e8841f575f20" (UID: "416da471-329e-45f8-b786-e8841f575f20"). InnerVolumeSpecName "kube-api-access-fmbrv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.575437 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "416da471-329e-45f8-b786-e8841f575f20" (UID: "416da471-329e-45f8-b786-e8841f575f20"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.581199 4914 scope.go:117] "RemoveContainer" containerID="22b6a69449be50b1aa84bf1899eff41cf65dd64c71bb204fe9b91007b9a45c53"
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.584526 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ca6012-ae4a-45ab-8975-9de943d2f790-kube-api-access-fd2hg" (OuterVolumeSpecName: "kube-api-access-fd2hg") pod "d3ca6012-ae4a-45ab-8975-9de943d2f790" (UID: "d3ca6012-ae4a-45ab-8975-9de943d2f790"). InnerVolumeSpecName "kube-api-access-fd2hg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.584685 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d3ca6012-ae4a-45ab-8975-9de943d2f790" (UID: "d3ca6012-ae4a-45ab-8975-9de943d2f790"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.604388 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "416da471-329e-45f8-b786-e8841f575f20" (UID: "416da471-329e-45f8-b786-e8841f575f20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.624369 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-config-data" (OuterVolumeSpecName: "config-data") pod "d3ca6012-ae4a-45ab-8975-9de943d2f790" (UID: "d3ca6012-ae4a-45ab-8975-9de943d2f790"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.627439 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3ca6012-ae4a-45ab-8975-9de943d2f790" (UID: "d3ca6012-ae4a-45ab-8975-9de943d2f790"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.644981 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-config-data" (OuterVolumeSpecName: "config-data") pod "416da471-329e-45f8-b786-e8841f575f20" (UID: "416da471-329e-45f8-b786-e8841f575f20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.671522 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.671549 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd2hg\" (UniqueName: \"kubernetes.io/projected/d3ca6012-ae4a-45ab-8975-9de943d2f790-kube-api-access-fd2hg\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.671562 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416da471-329e-45f8-b786-e8841f575f20-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.671571 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3ca6012-ae4a-45ab-8975-9de943d2f790-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.671579 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmbrv\" (UniqueName: \"kubernetes.io/projected/416da471-329e-45f8-b786-e8841f575f20-kube-api-access-fmbrv\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.671588 4914 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.671597 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.671605 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ca6012-ae4a-45ab-8975-9de943d2f790-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.671615 4914 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.671623 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416da471-329e-45f8-b786-e8841f575f20-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.745659 4914 scope.go:117] "RemoveContainer" containerID="d4f3c8f0c5f22d31ee34a5ee812905ecb1af58af9065e3c6f67bc9ace83c3e97"
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.872241 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-86dc4fcb7d-k54hq"]
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.883808 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-86dc4fcb7d-k54hq"]
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.891469 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7898f695d7-6lw8w"]
Jan 27 14:06:56 crc kubenswrapper[4914]: I0127 14:06:56.901157 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7898f695d7-6lw8w"]
Jan 27 14:06:57 crc kubenswrapper[4914]: I0127 14:06:57.536870 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6669b7ffb9-n8php"
Jan 27 14:06:57 crc kubenswrapper[4914]: I0127 14:06:57.646582 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64bb7f895-7ftxk"]
Jan 27 14:06:57 crc kubenswrapper[4914]: I0127 14:06:57.647224 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64bb7f895-7ftxk" podUID="41af7c55-6669-403f-b4cd-e62be1cd1db7" containerName="neutron-api" containerID="cri-o://aca192b82e5b342a6db1a51463adf47549942e55078ccb87c3e0dcf8b9b6c353" gracePeriod=30
Jan 27 14:06:57 crc kubenswrapper[4914]: I0127 14:06:57.647555 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64bb7f895-7ftxk" podUID="41af7c55-6669-403f-b4cd-e62be1cd1db7" containerName="neutron-httpd" containerID="cri-o://570edc130495a01cac24482f6ca01a7b3224c9f3c7d3637976c3592d6da75559" gracePeriod=30
Jan 27 14:06:58 crc kubenswrapper[4914]: I0127 14:06:58.307390 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="416da471-329e-45f8-b786-e8841f575f20" path="/var/lib/kubelet/pods/416da471-329e-45f8-b786-e8841f575f20/volumes"
Jan 27 14:06:58 crc kubenswrapper[4914]: I0127 14:06:58.308696 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ca6012-ae4a-45ab-8975-9de943d2f790" path="/var/lib/kubelet/pods/d3ca6012-ae4a-45ab-8975-9de943d2f790/volumes"
Jan 27 14:06:58 crc kubenswrapper[4914]: I0127 14:06:58.553532 4914 generic.go:334] "Generic (PLEG): container finished" podID="41af7c55-6669-403f-b4cd-e62be1cd1db7" containerID="570edc130495a01cac24482f6ca01a7b3224c9f3c7d3637976c3592d6da75559" exitCode=0
Jan 27 14:06:58 crc kubenswrapper[4914]: I0127 14:06:58.553580 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bb7f895-7ftxk" event={"ID":"41af7c55-6669-403f-b4cd-e62be1cd1db7","Type":"ContainerDied","Data":"570edc130495a01cac24482f6ca01a7b3224c9f3c7d3637976c3592d6da75559"}
Jan 27 14:06:59 crc kubenswrapper[4914]: I0127 14:06:59.564414 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xbz4b" event={"ID":"58d81784-ad81-47ce-befb-d2ec09617b1c","Type":"ContainerStarted","Data":"d3a53c20290ae91c6599b263904d53c65f4ebaa727da8689e26c279ecee27a54"}
Jan 27 14:06:59 crc kubenswrapper[4914]: I0127 14:06:59.594998 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xbz4b" podStartSLOduration=1.873833917 podStartE2EDuration="37.594974668s" podCreationTimestamp="2026-01-27 14:06:22 +0000 UTC" firstStartedPulling="2026-01-27 14:06:23.375710067 +0000 UTC m=+1341.688060152" lastFinishedPulling="2026-01-27 14:06:59.096850828 +0000 UTC m=+1377.409200903" observedRunningTime="2026-01-27 14:06:59.58925008 +0000 UTC m=+1377.901600195" watchObservedRunningTime="2026-01-27 14:06:59.594974668 +0000 UTC m=+1377.907324753"
Jan 27 14:07:00 crc kubenswrapper[4914]: I0127 14:07:00.038700 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-64bb7f895-7ftxk" podUID="41af7c55-6669-403f-b4cd-e62be1cd1db7" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.166:9696/\": dial tcp 10.217.0.166:9696: connect: connection refused"
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.616650 4914 generic.go:334] "Generic (PLEG): container finished" podID="41af7c55-6669-403f-b4cd-e62be1cd1db7" containerID="aca192b82e5b342a6db1a51463adf47549942e55078ccb87c3e0dcf8b9b6c353" exitCode=0
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.617249 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bb7f895-7ftxk" event={"ID":"41af7c55-6669-403f-b4cd-e62be1cd1db7","Type":"ContainerDied","Data":"aca192b82e5b342a6db1a51463adf47549942e55078ccb87c3e0dcf8b9b6c353"}
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.713105 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64bb7f895-7ftxk"
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.868068 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-config\") pod \"41af7c55-6669-403f-b4cd-e62be1cd1db7\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") "
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.868386 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5hdx\" (UniqueName: \"kubernetes.io/projected/41af7c55-6669-403f-b4cd-e62be1cd1db7-kube-api-access-n5hdx\") pod \"41af7c55-6669-403f-b4cd-e62be1cd1db7\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") "
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.868563 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-ovndb-tls-certs\") pod \"41af7c55-6669-403f-b4cd-e62be1cd1db7\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") "
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.868670 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-httpd-config\") pod \"41af7c55-6669-403f-b4cd-e62be1cd1db7\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") "
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.868792 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-public-tls-certs\") pod \"41af7c55-6669-403f-b4cd-e62be1cd1db7\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") "
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.868958 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-internal-tls-certs\") pod \"41af7c55-6669-403f-b4cd-e62be1cd1db7\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") "
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.869308 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-combined-ca-bundle\") pod \"41af7c55-6669-403f-b4cd-e62be1cd1db7\" (UID: \"41af7c55-6669-403f-b4cd-e62be1cd1db7\") "
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.876622 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41af7c55-6669-403f-b4cd-e62be1cd1db7-kube-api-access-n5hdx" (OuterVolumeSpecName: "kube-api-access-n5hdx") pod "41af7c55-6669-403f-b4cd-e62be1cd1db7" (UID: "41af7c55-6669-403f-b4cd-e62be1cd1db7"). InnerVolumeSpecName "kube-api-access-n5hdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.877117 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "41af7c55-6669-403f-b4cd-e62be1cd1db7" (UID: "41af7c55-6669-403f-b4cd-e62be1cd1db7"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.930686 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "41af7c55-6669-403f-b4cd-e62be1cd1db7" (UID: "41af7c55-6669-403f-b4cd-e62be1cd1db7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.931989 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "41af7c55-6669-403f-b4cd-e62be1cd1db7" (UID: "41af7c55-6669-403f-b4cd-e62be1cd1db7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.954904 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41af7c55-6669-403f-b4cd-e62be1cd1db7" (UID: "41af7c55-6669-403f-b4cd-e62be1cd1db7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.955818 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-config" (OuterVolumeSpecName: "config") pod "41af7c55-6669-403f-b4cd-e62be1cd1db7" (UID: "41af7c55-6669-403f-b4cd-e62be1cd1db7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.959503 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "41af7c55-6669-403f-b4cd-e62be1cd1db7" (UID: "41af7c55-6669-403f-b4cd-e62be1cd1db7"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.971714 4914 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.971753 4914 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.971762 4914 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.971772 4914 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.971781 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.971790 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/41af7c55-6669-403f-b4cd-e62be1cd1db7-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:07:03 crc kubenswrapper[4914]: I0127 14:07:03.971799 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5hdx\" (UniqueName: \"kubernetes.io/projected/41af7c55-6669-403f-b4cd-e62be1cd1db7-kube-api-access-n5hdx\") on node \"crc\" DevicePath \"\""
Jan 27 14:07:04 crc kubenswrapper[4914]: I0127 14:07:04.561863 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:07:04 crc kubenswrapper[4914]: I0127 14:07:04.562865 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d88d8dfdb-6svwq"
Jan 27 14:07:04 crc kubenswrapper[4914]: I0127 14:07:04.632562 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64bb7f895-7ftxk"
Jan 27 14:07:04 crc kubenswrapper[4914]: I0127 14:07:04.633112 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bb7f895-7ftxk" event={"ID":"41af7c55-6669-403f-b4cd-e62be1cd1db7","Type":"ContainerDied","Data":"4900f5319fcdc6e9760f9a359594ff584f22e880ac5761dd1dfbe01c9146e1f4"}
Jan 27 14:07:04 crc kubenswrapper[4914]: I0127 14:07:04.633146 4914 scope.go:117] "RemoveContainer" containerID="570edc130495a01cac24482f6ca01a7b3224c9f3c7d3637976c3592d6da75559"
Jan 27 14:07:04 crc kubenswrapper[4914]: I0127 14:07:04.667738 4914 scope.go:117] "RemoveContainer" containerID="aca192b82e5b342a6db1a51463adf47549942e55078ccb87c3e0dcf8b9b6c353"
Jan 27 14:07:04 crc kubenswrapper[4914]: I0127 14:07:04.668653 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-66649fdc7d-bbgtq"]
Jan 27 14:07:04 crc kubenswrapper[4914]: I0127 14:07:04.669095 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-66649fdc7d-bbgtq" podUID="87a4ea21-5742-4829-aa81-d62dc8f5f5e4" containerName="placement-log" containerID="cri-o://53195d022deb479b90e9aec14958fe34410a4cd991583080f554d41674f67fba" gracePeriod=30
Jan 27 14:07:04 crc kubenswrapper[4914]: I0127 14:07:04.669253 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-66649fdc7d-bbgtq" podUID="87a4ea21-5742-4829-aa81-d62dc8f5f5e4" containerName="placement-api" containerID="cri-o://0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193" gracePeriod=30
Jan 27 14:07:04 crc kubenswrapper[4914]: I0127 14:07:04.693553 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64bb7f895-7ftxk"]
Jan 27 14:07:04 crc kubenswrapper[4914]: I0127 14:07:04.716503 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64bb7f895-7ftxk"]
Jan 27 14:07:05 crc kubenswrapper[4914]: I0127 14:07:05.643934 4914 generic.go:334] "Generic (PLEG): container finished" podID="87a4ea21-5742-4829-aa81-d62dc8f5f5e4" containerID="53195d022deb479b90e9aec14958fe34410a4cd991583080f554d41674f67fba" exitCode=143
Jan 27 14:07:05 crc kubenswrapper[4914]: I0127 14:07:05.644141 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66649fdc7d-bbgtq" event={"ID":"87a4ea21-5742-4829-aa81-d62dc8f5f5e4","Type":"ContainerDied","Data":"53195d022deb479b90e9aec14958fe34410a4cd991583080f554d41674f67fba"}
Jan 27 14:07:06 crc kubenswrapper[4914]: I0127 14:07:06.306101 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41af7c55-6669-403f-b4cd-e62be1cd1db7" path="/var/lib/kubelet/pods/41af7c55-6669-403f-b4cd-e62be1cd1db7/volumes"
Jan 27 14:07:07 crc kubenswrapper[4914]: E0127 14:07:07.911506 4914 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87a4ea21_5742_4829_aa81_d62dc8f5f5e4.slice/crio-0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.254680 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66649fdc7d-bbgtq"
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.316770 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-combined-ca-bundle\") pod \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") "
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.316849 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-config-data\") pod \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") "
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.316916 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-public-tls-certs\") pod \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") "
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.316971 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-logs\") pod \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") "
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.317174 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-scripts\") pod \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") "
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.317516 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-logs" (OuterVolumeSpecName: "logs") pod "87a4ea21-5742-4829-aa81-d62dc8f5f5e4" (UID: "87a4ea21-5742-4829-aa81-d62dc8f5f5e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.317736 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-internal-tls-certs\") pod \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") "
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.317795 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9qnp\" (UniqueName: \"kubernetes.io/projected/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-kube-api-access-p9qnp\") pod \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\" (UID: \"87a4ea21-5742-4829-aa81-d62dc8f5f5e4\") "
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.318914 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.322339 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-scripts" (OuterVolumeSpecName: "scripts") pod "87a4ea21-5742-4829-aa81-d62dc8f5f5e4" (UID: "87a4ea21-5742-4829-aa81-d62dc8f5f5e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.324646 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-kube-api-access-p9qnp" (OuterVolumeSpecName: "kube-api-access-p9qnp") pod "87a4ea21-5742-4829-aa81-d62dc8f5f5e4" (UID: "87a4ea21-5742-4829-aa81-d62dc8f5f5e4"). InnerVolumeSpecName "kube-api-access-p9qnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.365081 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-config-data" (OuterVolumeSpecName: "config-data") pod "87a4ea21-5742-4829-aa81-d62dc8f5f5e4" (UID: "87a4ea21-5742-4829-aa81-d62dc8f5f5e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.367903 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87a4ea21-5742-4829-aa81-d62dc8f5f5e4" (UID: "87a4ea21-5742-4829-aa81-d62dc8f5f5e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.400341 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "87a4ea21-5742-4829-aa81-d62dc8f5f5e4" (UID: "87a4ea21-5742-4829-aa81-d62dc8f5f5e4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.415954 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "87a4ea21-5742-4829-aa81-d62dc8f5f5e4" (UID: "87a4ea21-5742-4829-aa81-d62dc8f5f5e4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.420367 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.420399 4914 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.420408 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9qnp\" (UniqueName: \"kubernetes.io/projected/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-kube-api-access-p9qnp\") on node \"crc\" DevicePath \"\""
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.420417 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.420425 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.420433 4914 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName:
\"kubernetes.io/secret/87a4ea21-5742-4829-aa81-d62dc8f5f5e4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.712149 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66649fdc7d-bbgtq" Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.713750 4914 generic.go:334] "Generic (PLEG): container finished" podID="87a4ea21-5742-4829-aa81-d62dc8f5f5e4" containerID="0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193" exitCode=0 Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.713816 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66649fdc7d-bbgtq" event={"ID":"87a4ea21-5742-4829-aa81-d62dc8f5f5e4","Type":"ContainerDied","Data":"0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193"} Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.713869 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66649fdc7d-bbgtq" event={"ID":"87a4ea21-5742-4829-aa81-d62dc8f5f5e4","Type":"ContainerDied","Data":"1d21194b9e36141864e8420bbea6735d6fb5887e339b905f5c649535dcd02967"} Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.713902 4914 scope.go:117] "RemoveContainer" containerID="0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193" Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.751248 4914 scope.go:117] "RemoveContainer" containerID="53195d022deb479b90e9aec14958fe34410a4cd991583080f554d41674f67fba" Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.766533 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-66649fdc7d-bbgtq"] Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.779449 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-66649fdc7d-bbgtq"] Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.780375 4914 scope.go:117] "RemoveContainer" 
containerID="0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193" Jan 27 14:07:08 crc kubenswrapper[4914]: E0127 14:07:08.781119 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193\": container with ID starting with 0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193 not found: ID does not exist" containerID="0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193" Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.781155 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193"} err="failed to get container status \"0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193\": rpc error: code = NotFound desc = could not find container \"0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193\": container with ID starting with 0693abc604aca9f014fe8a04f902607e1280981f6825b260397c48b039c3a193 not found: ID does not exist" Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.781183 4914 scope.go:117] "RemoveContainer" containerID="53195d022deb479b90e9aec14958fe34410a4cd991583080f554d41674f67fba" Jan 27 14:07:08 crc kubenswrapper[4914]: E0127 14:07:08.781869 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53195d022deb479b90e9aec14958fe34410a4cd991583080f554d41674f67fba\": container with ID starting with 53195d022deb479b90e9aec14958fe34410a4cd991583080f554d41674f67fba not found: ID does not exist" containerID="53195d022deb479b90e9aec14958fe34410a4cd991583080f554d41674f67fba" Jan 27 14:07:08 crc kubenswrapper[4914]: I0127 14:07:08.781908 4914 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"53195d022deb479b90e9aec14958fe34410a4cd991583080f554d41674f67fba"} err="failed to get container status \"53195d022deb479b90e9aec14958fe34410a4cd991583080f554d41674f67fba\": rpc error: code = NotFound desc = could not find container \"53195d022deb479b90e9aec14958fe34410a4cd991583080f554d41674f67fba\": container with ID starting with 53195d022deb479b90e9aec14958fe34410a4cd991583080f554d41674f67fba not found: ID does not exist" Jan 27 14:07:09 crc kubenswrapper[4914]: I0127 14:07:09.727262 4914 generic.go:334] "Generic (PLEG): container finished" podID="58d81784-ad81-47ce-befb-d2ec09617b1c" containerID="d3a53c20290ae91c6599b263904d53c65f4ebaa727da8689e26c279ecee27a54" exitCode=0 Jan 27 14:07:09 crc kubenswrapper[4914]: I0127 14:07:09.727395 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xbz4b" event={"ID":"58d81784-ad81-47ce-befb-d2ec09617b1c","Type":"ContainerDied","Data":"d3a53c20290ae91c6599b263904d53c65f4ebaa727da8689e26c279ecee27a54"} Jan 27 14:07:10 crc kubenswrapper[4914]: I0127 14:07:10.305672 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87a4ea21-5742-4829-aa81-d62dc8f5f5e4" path="/var/lib/kubelet/pods/87a4ea21-5742-4829-aa81-d62dc8f5f5e4/volumes" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.073033 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.172942 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-scripts\") pod \"58d81784-ad81-47ce-befb-d2ec09617b1c\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.173003 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-config-data\") pod \"58d81784-ad81-47ce-befb-d2ec09617b1c\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.173156 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-combined-ca-bundle\") pod \"58d81784-ad81-47ce-befb-d2ec09617b1c\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.173216 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65qxk\" (UniqueName: \"kubernetes.io/projected/58d81784-ad81-47ce-befb-d2ec09617b1c-kube-api-access-65qxk\") pod \"58d81784-ad81-47ce-befb-d2ec09617b1c\" (UID: \"58d81784-ad81-47ce-befb-d2ec09617b1c\") " Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.179039 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-scripts" (OuterVolumeSpecName: "scripts") pod "58d81784-ad81-47ce-befb-d2ec09617b1c" (UID: "58d81784-ad81-47ce-befb-d2ec09617b1c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.179051 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d81784-ad81-47ce-befb-d2ec09617b1c-kube-api-access-65qxk" (OuterVolumeSpecName: "kube-api-access-65qxk") pod "58d81784-ad81-47ce-befb-d2ec09617b1c" (UID: "58d81784-ad81-47ce-befb-d2ec09617b1c"). InnerVolumeSpecName "kube-api-access-65qxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.200975 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-config-data" (OuterVolumeSpecName: "config-data") pod "58d81784-ad81-47ce-befb-d2ec09617b1c" (UID: "58d81784-ad81-47ce-befb-d2ec09617b1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.205008 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58d81784-ad81-47ce-befb-d2ec09617b1c" (UID: "58d81784-ad81-47ce-befb-d2ec09617b1c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.274920 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.275093 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.275106 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d81784-ad81-47ce-befb-d2ec09617b1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.275118 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65qxk\" (UniqueName: \"kubernetes.io/projected/58d81784-ad81-47ce-befb-d2ec09617b1c-kube-api-access-65qxk\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.750288 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xbz4b" event={"ID":"58d81784-ad81-47ce-befb-d2ec09617b1c","Type":"ContainerDied","Data":"3801ab92ddbcfd68078db723f3d49b66af3f4e8a7e3fe1e0cbdc88d83e0a7e07"} Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.750327 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3801ab92ddbcfd68078db723f3d49b66af3f4e8a7e3fe1e0cbdc88d83e0a7e07" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.750376 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xbz4b" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.916513 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 14:07:11 crc kubenswrapper[4914]: E0127 14:07:11.917033 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416da471-329e-45f8-b786-e8841f575f20" containerName="barbican-keystone-listener-log" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917051 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="416da471-329e-45f8-b786-e8841f575f20" containerName="barbican-keystone-listener-log" Jan 27 14:07:11 crc kubenswrapper[4914]: E0127 14:07:11.917068 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ca6012-ae4a-45ab-8975-9de943d2f790" containerName="barbican-worker-log" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917076 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ca6012-ae4a-45ab-8975-9de943d2f790" containerName="barbican-worker-log" Jan 27 14:07:11 crc kubenswrapper[4914]: E0127 14:07:11.917090 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a4ea21-5742-4829-aa81-d62dc8f5f5e4" containerName="placement-log" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917100 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a4ea21-5742-4829-aa81-d62dc8f5f5e4" containerName="placement-log" Jan 27 14:07:11 crc kubenswrapper[4914]: E0127 14:07:11.917114 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41af7c55-6669-403f-b4cd-e62be1cd1db7" containerName="neutron-api" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917122 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="41af7c55-6669-403f-b4cd-e62be1cd1db7" containerName="neutron-api" Jan 27 14:07:11 crc kubenswrapper[4914]: E0127 14:07:11.917135 4914 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="87a4ea21-5742-4829-aa81-d62dc8f5f5e4" containerName="placement-api" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917143 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a4ea21-5742-4829-aa81-d62dc8f5f5e4" containerName="placement-api" Jan 27 14:07:11 crc kubenswrapper[4914]: E0127 14:07:11.917161 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a1896c-aac3-4c71-8d04-e608cc34f5f6" containerName="proxy-server" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917168 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a1896c-aac3-4c71-8d04-e608cc34f5f6" containerName="proxy-server" Jan 27 14:07:11 crc kubenswrapper[4914]: E0127 14:07:11.917186 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ca6012-ae4a-45ab-8975-9de943d2f790" containerName="barbican-worker" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917194 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ca6012-ae4a-45ab-8975-9de943d2f790" containerName="barbican-worker" Jan 27 14:07:11 crc kubenswrapper[4914]: E0127 14:07:11.917205 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a1896c-aac3-4c71-8d04-e608cc34f5f6" containerName="proxy-httpd" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917214 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a1896c-aac3-4c71-8d04-e608cc34f5f6" containerName="proxy-httpd" Jan 27 14:07:11 crc kubenswrapper[4914]: E0127 14:07:11.917234 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d81784-ad81-47ce-befb-d2ec09617b1c" containerName="nova-cell0-conductor-db-sync" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917242 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d81784-ad81-47ce-befb-d2ec09617b1c" containerName="nova-cell0-conductor-db-sync" Jan 27 14:07:11 crc kubenswrapper[4914]: E0127 14:07:11.917261 4914 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="416da471-329e-45f8-b786-e8841f575f20" containerName="barbican-keystone-listener" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917271 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="416da471-329e-45f8-b786-e8841f575f20" containerName="barbican-keystone-listener" Jan 27 14:07:11 crc kubenswrapper[4914]: E0127 14:07:11.917289 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41af7c55-6669-403f-b4cd-e62be1cd1db7" containerName="neutron-httpd" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917297 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="41af7c55-6669-403f-b4cd-e62be1cd1db7" containerName="neutron-httpd" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917504 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d81784-ad81-47ce-befb-d2ec09617b1c" containerName="nova-cell0-conductor-db-sync" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917522 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ca6012-ae4a-45ab-8975-9de943d2f790" containerName="barbican-worker" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917534 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="416da471-329e-45f8-b786-e8841f575f20" containerName="barbican-keystone-listener" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917551 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a4ea21-5742-4829-aa81-d62dc8f5f5e4" containerName="placement-log" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917564 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a1896c-aac3-4c71-8d04-e608cc34f5f6" containerName="proxy-httpd" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917578 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a1896c-aac3-4c71-8d04-e608cc34f5f6" containerName="proxy-server" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917588 4914 
memory_manager.go:354] "RemoveStaleState removing state" podUID="87a4ea21-5742-4829-aa81-d62dc8f5f5e4" containerName="placement-api" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917605 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ca6012-ae4a-45ab-8975-9de943d2f790" containerName="barbican-worker-log" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917619 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="416da471-329e-45f8-b786-e8841f575f20" containerName="barbican-keystone-listener-log" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917633 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="41af7c55-6669-403f-b4cd-e62be1cd1db7" containerName="neutron-api" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.917651 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="41af7c55-6669-403f-b4cd-e62be1cd1db7" containerName="neutron-httpd" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.919450 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.928283 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.952243 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.952563 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qfkwf" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.990292 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e452ff2e-dbbd-484d-80b0-45883aa5fca3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.990384 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e452ff2e-dbbd-484d-80b0-45883aa5fca3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:07:11 crc kubenswrapper[4914]: I0127 14:07:11.990539 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqqgq\" (UniqueName: \"kubernetes.io/projected/e452ff2e-dbbd-484d-80b0-45883aa5fca3-kube-api-access-pqqgq\") pod \"nova-cell0-conductor-0\" (UID: \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:07:12 crc kubenswrapper[4914]: I0127 14:07:12.092249 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e452ff2e-dbbd-484d-80b0-45883aa5fca3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:07:12 crc kubenswrapper[4914]: I0127 14:07:12.092402 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqqgq\" (UniqueName: \"kubernetes.io/projected/e452ff2e-dbbd-484d-80b0-45883aa5fca3-kube-api-access-pqqgq\") pod \"nova-cell0-conductor-0\" (UID: \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:07:12 crc kubenswrapper[4914]: I0127 14:07:12.092510 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e452ff2e-dbbd-484d-80b0-45883aa5fca3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:07:12 crc kubenswrapper[4914]: I0127 14:07:12.097108 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e452ff2e-dbbd-484d-80b0-45883aa5fca3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:07:12 crc kubenswrapper[4914]: I0127 14:07:12.097455 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e452ff2e-dbbd-484d-80b0-45883aa5fca3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:07:12 crc kubenswrapper[4914]: I0127 14:07:12.111001 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqqgq\" (UniqueName: \"kubernetes.io/projected/e452ff2e-dbbd-484d-80b0-45883aa5fca3-kube-api-access-pqqgq\") pod \"nova-cell0-conductor-0\" 
(UID: \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:07:12 crc kubenswrapper[4914]: I0127 14:07:12.269970 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 14:07:12 crc kubenswrapper[4914]: W0127 14:07:12.572577 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode452ff2e_dbbd_484d_80b0_45883aa5fca3.slice/crio-c9858480c6607043a1d248d72a8ccf387bf2ebe6328dc7d241f52f095e8f6eee WatchSource:0}: Error finding container c9858480c6607043a1d248d72a8ccf387bf2ebe6328dc7d241f52f095e8f6eee: Status 404 returned error can't find the container with id c9858480c6607043a1d248d72a8ccf387bf2ebe6328dc7d241f52f095e8f6eee Jan 27 14:07:12 crc kubenswrapper[4914]: I0127 14:07:12.582989 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 14:07:12 crc kubenswrapper[4914]: I0127 14:07:12.761111 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e452ff2e-dbbd-484d-80b0-45883aa5fca3","Type":"ContainerStarted","Data":"c9858480c6607043a1d248d72a8ccf387bf2ebe6328dc7d241f52f095e8f6eee"} Jan 27 14:07:13 crc kubenswrapper[4914]: I0127 14:07:13.771046 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e452ff2e-dbbd-484d-80b0-45883aa5fca3","Type":"ContainerStarted","Data":"e20bc525535dca84894bea9c8166d5febea6348255cb0f9d2fb922214417f693"} Jan 27 14:07:13 crc kubenswrapper[4914]: I0127 14:07:13.772210 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 14:07:13 crc kubenswrapper[4914]: I0127 14:07:13.793179 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.793157915 podStartE2EDuration="2.793157915s" 
podCreationTimestamp="2026-01-27 14:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:13.791715194 +0000 UTC m=+1392.104065289" watchObservedRunningTime="2026-01-27 14:07:13.793157915 +0000 UTC m=+1392.105508000" Jan 27 14:07:15 crc kubenswrapper[4914]: I0127 14:07:15.835976 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.300065 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.763127 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-hvsxg"] Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.771171 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.773538 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.773693 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.776050 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hvsxg"] Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.808104 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-scripts\") pod \"nova-cell0-cell-mapping-hvsxg\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.808143 4914 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-config-data\") pod \"nova-cell0-cell-mapping-hvsxg\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.808169 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jv5w\" (UniqueName: \"kubernetes.io/projected/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-kube-api-access-6jv5w\") pod \"nova-cell0-cell-mapping-hvsxg\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.808186 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hvsxg\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.909898 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-scripts\") pod \"nova-cell0-cell-mapping-hvsxg\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.909940 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-config-data\") pod \"nova-cell0-cell-mapping-hvsxg\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.909966 4914 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jv5w\" (UniqueName: \"kubernetes.io/projected/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-kube-api-access-6jv5w\") pod \"nova-cell0-cell-mapping-hvsxg\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.909987 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hvsxg\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.915492 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hvsxg\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.915492 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-scripts\") pod \"nova-cell0-cell-mapping-hvsxg\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.916146 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-config-data\") pod \"nova-cell0-cell-mapping-hvsxg\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.943669 4914 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-api-0"] Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.945349 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.957367 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.958214 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jv5w\" (UniqueName: \"kubernetes.io/projected/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-kube-api-access-6jv5w\") pod \"nova-cell0-cell-mapping-hvsxg\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.960736 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.961881 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:17 crc kubenswrapper[4914]: I0127 14:07:17.964955 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.015674 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brnjk\" (UniqueName: \"kubernetes.io/projected/f66f9483-7be9-4f55-8e6e-144bbc391d55-kube-api-access-brnjk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f66f9483-7be9-4f55-8e6e-144bbc391d55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.015810 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66f9483-7be9-4f55-8e6e-144bbc391d55-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f66f9483-7be9-4f55-8e6e-144bbc391d55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.015854 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66f9483-7be9-4f55-8e6e-144bbc391d55-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f66f9483-7be9-4f55-8e6e-144bbc391d55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.015897 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e5891e-45f7-4898-8172-773b64a58b3a-config-data\") pod \"nova-api-0\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " pod="openstack/nova-api-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.015930 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/36e5891e-45f7-4898-8172-773b64a58b3a-logs\") pod \"nova-api-0\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " pod="openstack/nova-api-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.015957 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e5891e-45f7-4898-8172-773b64a58b3a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " pod="openstack/nova-api-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.016019 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh4vq\" (UniqueName: \"kubernetes.io/projected/36e5891e-45f7-4898-8172-773b64a58b3a-kube-api-access-mh4vq\") pod \"nova-api-0\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " pod="openstack/nova-api-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.016973 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.038125 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.103527 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.110358 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.112569 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.146233 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.149947 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66f9483-7be9-4f55-8e6e-144bbc391d55-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f66f9483-7be9-4f55-8e6e-144bbc391d55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.150001 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66f9483-7be9-4f55-8e6e-144bbc391d55-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f66f9483-7be9-4f55-8e6e-144bbc391d55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.150043 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e5891e-45f7-4898-8172-773b64a58b3a-config-data\") pod \"nova-api-0\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " pod="openstack/nova-api-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.150076 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36e5891e-45f7-4898-8172-773b64a58b3a-logs\") pod \"nova-api-0\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " pod="openstack/nova-api-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.150102 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e5891e-45f7-4898-8172-773b64a58b3a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " 
pod="openstack/nova-api-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.150159 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh4vq\" (UniqueName: \"kubernetes.io/projected/36e5891e-45f7-4898-8172-773b64a58b3a-kube-api-access-mh4vq\") pod \"nova-api-0\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " pod="openstack/nova-api-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.150288 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brnjk\" (UniqueName: \"kubernetes.io/projected/f66f9483-7be9-4f55-8e6e-144bbc391d55-kube-api-access-brnjk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f66f9483-7be9-4f55-8e6e-144bbc391d55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.152073 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36e5891e-45f7-4898-8172-773b64a58b3a-logs\") pod \"nova-api-0\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " pod="openstack/nova-api-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.173891 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e5891e-45f7-4898-8172-773b64a58b3a-config-data\") pod \"nova-api-0\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " pod="openstack/nova-api-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.176524 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e5891e-45f7-4898-8172-773b64a58b3a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " pod="openstack/nova-api-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.194531 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f66f9483-7be9-4f55-8e6e-144bbc391d55-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f66f9483-7be9-4f55-8e6e-144bbc391d55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.195206 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh4vq\" (UniqueName: \"kubernetes.io/projected/36e5891e-45f7-4898-8172-773b64a58b3a-kube-api-access-mh4vq\") pod \"nova-api-0\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " pod="openstack/nova-api-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.202435 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66f9483-7be9-4f55-8e6e-144bbc391d55-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f66f9483-7be9-4f55-8e6e-144bbc391d55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.206758 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brnjk\" (UniqueName: \"kubernetes.io/projected/f66f9483-7be9-4f55-8e6e-144bbc391d55-kube-api-access-brnjk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f66f9483-7be9-4f55-8e6e-144bbc391d55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.254779 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64463da-43d7-4c6b-819f-b7dd45a891d0-logs\") pod \"nova-metadata-0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.254981 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zzbl\" (UniqueName: \"kubernetes.io/projected/a64463da-43d7-4c6b-819f-b7dd45a891d0-kube-api-access-7zzbl\") pod \"nova-metadata-0\" (UID: 
\"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.255117 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64463da-43d7-4c6b-819f-b7dd45a891d0-config-data\") pod \"nova-metadata-0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.255353 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64463da-43d7-4c6b-819f-b7dd45a891d0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.259029 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.326555 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.329395 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.338753 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.340374 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.354372 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.356341 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zzbl\" (UniqueName: \"kubernetes.io/projected/a64463da-43d7-4c6b-819f-b7dd45a891d0-kube-api-access-7zzbl\") pod \"nova-metadata-0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.356428 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64463da-43d7-4c6b-819f-b7dd45a891d0-config-data\") pod \"nova-metadata-0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.356458 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64463da-43d7-4c6b-819f-b7dd45a891d0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.361806 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-6wnl7"] Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.364773 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64463da-43d7-4c6b-819f-b7dd45a891d0-logs\") pod \"nova-metadata-0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.367875 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.368664 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64463da-43d7-4c6b-819f-b7dd45a891d0-logs\") pod \"nova-metadata-0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.369123 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64463da-43d7-4c6b-819f-b7dd45a891d0-config-data\") pod \"nova-metadata-0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.370977 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64463da-43d7-4c6b-819f-b7dd45a891d0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.374516 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-6wnl7"] Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.389821 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zzbl\" (UniqueName: \"kubernetes.io/projected/a64463da-43d7-4c6b-819f-b7dd45a891d0-kube-api-access-7zzbl\") pod \"nova-metadata-0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.405053 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.467331 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.467453 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.467484 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b24ff69-cfa5-4f60-a771-81dd0abda624-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8b24ff69-cfa5-4f60-a771-81dd0abda624\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.467539 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-config\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.467591 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: 
\"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.467616 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsmlx\" (UniqueName: \"kubernetes.io/projected/8b24ff69-cfa5-4f60-a771-81dd0abda624-kube-api-access-rsmlx\") pod \"nova-scheduler-0\" (UID: \"8b24ff69-cfa5-4f60-a771-81dd0abda624\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.467659 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b24ff69-cfa5-4f60-a771-81dd0abda624-config-data\") pod \"nova-scheduler-0\" (UID: \"8b24ff69-cfa5-4f60-a771-81dd0abda624\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.467701 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kptzm\" (UniqueName: \"kubernetes.io/projected/a108d6f6-d3b1-4480-b9d4-ff8273c10546-kube-api-access-kptzm\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.467738 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: W0127 14:07:18.550227 4914 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b3f7735_ffe2_40bc_9055_67f89a4a3a95.slice/crio-d2b37b146423ecbac8e363d1ee68a93c8c9eef95bed128404f80de60579566e3 WatchSource:0}: Error finding container d2b37b146423ecbac8e363d1ee68a93c8c9eef95bed128404f80de60579566e3: Status 404 returned error can't find the container with id d2b37b146423ecbac8e363d1ee68a93c8c9eef95bed128404f80de60579566e3 Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.551196 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hvsxg"] Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.569201 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.569263 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b24ff69-cfa5-4f60-a771-81dd0abda624-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8b24ff69-cfa5-4f60-a771-81dd0abda624\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.569312 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-config\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.569355 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" 
(UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.569377 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsmlx\" (UniqueName: \"kubernetes.io/projected/8b24ff69-cfa5-4f60-a771-81dd0abda624-kube-api-access-rsmlx\") pod \"nova-scheduler-0\" (UID: \"8b24ff69-cfa5-4f60-a771-81dd0abda624\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.569418 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b24ff69-cfa5-4f60-a771-81dd0abda624-config-data\") pod \"nova-scheduler-0\" (UID: \"8b24ff69-cfa5-4f60-a771-81dd0abda624\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.569467 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kptzm\" (UniqueName: \"kubernetes.io/projected/a108d6f6-d3b1-4480-b9d4-ff8273c10546-kube-api-access-kptzm\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.569508 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.569579 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " 
pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.570027 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.574003 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.574384 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.576218 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b24ff69-cfa5-4f60-a771-81dd0abda624-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8b24ff69-cfa5-4f60-a771-81dd0abda624\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.576817 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b24ff69-cfa5-4f60-a771-81dd0abda624-config-data\") pod \"nova-scheduler-0\" (UID: \"8b24ff69-cfa5-4f60-a771-81dd0abda624\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.583438 4914 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.583563 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-config\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.585639 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kptzm\" (UniqueName: \"kubernetes.io/projected/a108d6f6-d3b1-4480-b9d4-ff8273c10546-kube-api-access-kptzm\") pod \"dnsmasq-dns-557bbc7df7-6wnl7\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.588155 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsmlx\" (UniqueName: \"kubernetes.io/projected/8b24ff69-cfa5-4f60-a771-81dd0abda624-kube-api-access-rsmlx\") pod \"nova-scheduler-0\" (UID: \"8b24ff69-cfa5-4f60-a771-81dd0abda624\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.643627 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.657451 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.745842 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:18 crc kubenswrapper[4914]: I0127 14:07:18.854954 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hvsxg" event={"ID":"3b3f7735-ffe2-40bc-9055-67f89a4a3a95","Type":"ContainerStarted","Data":"d2b37b146423ecbac8e363d1ee68a93c8c9eef95bed128404f80de60579566e3"} Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.072891 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.187800 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.263797 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m6sr7"] Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.265157 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.269921 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.270023 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.281349 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m6sr7"] Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.307427 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:07:19 crc kubenswrapper[4914]: W0127 14:07:19.312989 4914 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b24ff69_cfa5_4f60_a771_81dd0abda624.slice/crio-c183a32bfaa9ec0e6be78cbcb1966e58fe77800e2be545a5d30ae6bca079eb88 WatchSource:0}: Error finding container c183a32bfaa9ec0e6be78cbcb1966e58fe77800e2be545a5d30ae6bca079eb88: Status 404 returned error can't find the container with id c183a32bfaa9ec0e6be78cbcb1966e58fe77800e2be545a5d30ae6bca079eb88 Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.332510 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.398051 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-scripts\") pod \"nova-cell1-conductor-db-sync-m6sr7\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.398111 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdf8q\" (UniqueName: \"kubernetes.io/projected/1d67207a-f8f7-4b0d-aa50-be147a8ba810-kube-api-access-rdf8q\") pod \"nova-cell1-conductor-db-sync-m6sr7\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.398152 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-config-data\") pod \"nova-cell1-conductor-db-sync-m6sr7\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.398212 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-m6sr7\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.425280 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-6wnl7"] Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.500326 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-scripts\") pod \"nova-cell1-conductor-db-sync-m6sr7\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.500824 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdf8q\" (UniqueName: \"kubernetes.io/projected/1d67207a-f8f7-4b0d-aa50-be147a8ba810-kube-api-access-rdf8q\") pod \"nova-cell1-conductor-db-sync-m6sr7\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.500878 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-config-data\") pod \"nova-cell1-conductor-db-sync-m6sr7\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.500931 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-m6sr7\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " pod="openstack/nova-cell1-conductor-db-sync-m6sr7" 
Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.505135 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-scripts\") pod \"nova-cell1-conductor-db-sync-m6sr7\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.505423 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-m6sr7\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.505729 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-config-data\") pod \"nova-cell1-conductor-db-sync-m6sr7\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.518698 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdf8q\" (UniqueName: \"kubernetes.io/projected/1d67207a-f8f7-4b0d-aa50-be147a8ba810-kube-api-access-rdf8q\") pod \"nova-cell1-conductor-db-sync-m6sr7\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.692479 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.874495 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36e5891e-45f7-4898-8172-773b64a58b3a","Type":"ContainerStarted","Data":"f7ee44a454d44c7193b1477c3235d4e1965e8dc6b16f6ac4215d0dda5ee192fc"} Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.880291 4914 generic.go:334] "Generic (PLEG): container finished" podID="a108d6f6-d3b1-4480-b9d4-ff8273c10546" containerID="0c93aeef90a993012d9bdc80253d53ca4088501f65a21bb65c2d958a9e1ae22b" exitCode=0 Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.880378 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" event={"ID":"a108d6f6-d3b1-4480-b9d4-ff8273c10546","Type":"ContainerDied","Data":"0c93aeef90a993012d9bdc80253d53ca4088501f65a21bb65c2d958a9e1ae22b"} Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.880418 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" event={"ID":"a108d6f6-d3b1-4480-b9d4-ff8273c10546","Type":"ContainerStarted","Data":"d5b38ba496afe19a1b3c78cec05117edc65b7ca27be96bdd27257b308458e6c0"} Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.888344 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f66f9483-7be9-4f55-8e6e-144bbc391d55","Type":"ContainerStarted","Data":"4613dd9b0c7b3a0b694c3c7ce930e381e5f434f8a6182e56ae39516cf6b245b8"} Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.894439 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8b24ff69-cfa5-4f60-a771-81dd0abda624","Type":"ContainerStarted","Data":"c183a32bfaa9ec0e6be78cbcb1966e58fe77800e2be545a5d30ae6bca079eb88"} Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.908874 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-hvsxg" event={"ID":"3b3f7735-ffe2-40bc-9055-67f89a4a3a95","Type":"ContainerStarted","Data":"4fab63ecbb2167d4aa10709d657a7afa1815e139ebae6fe7d1cb9e61155c09df"} Jan 27 14:07:19 crc kubenswrapper[4914]: I0127 14:07:19.913592 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a64463da-43d7-4c6b-819f-b7dd45a891d0","Type":"ContainerStarted","Data":"7ccd0483f7f3787d0b3074284d71431e5b40c6d86361e0a06aefcb483d5403ae"} Jan 27 14:07:20 crc kubenswrapper[4914]: I0127 14:07:20.169489 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-hvsxg" podStartSLOduration=3.169470878 podStartE2EDuration="3.169470878s" podCreationTimestamp="2026-01-27 14:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:19.941860522 +0000 UTC m=+1398.254210627" watchObservedRunningTime="2026-01-27 14:07:20.169470878 +0000 UTC m=+1398.481820983" Jan 27 14:07:20 crc kubenswrapper[4914]: I0127 14:07:20.177304 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m6sr7"] Jan 27 14:07:20 crc kubenswrapper[4914]: I0127 14:07:20.927759 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" event={"ID":"a108d6f6-d3b1-4480-b9d4-ff8273c10546","Type":"ContainerStarted","Data":"5ceb4ed78648f4597d51777c757f02a0b246eed38d9872e9ed2ab3f53a2d196b"} Jan 27 14:07:20 crc kubenswrapper[4914]: I0127 14:07:20.928595 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:20 crc kubenswrapper[4914]: I0127 14:07:20.931022 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m6sr7" 
event={"ID":"1d67207a-f8f7-4b0d-aa50-be147a8ba810","Type":"ContainerStarted","Data":"7771ca1de9d47c34ff125fa812e479371d69aae563a4f8ff18c5927911fe867c"} Jan 27 14:07:20 crc kubenswrapper[4914]: I0127 14:07:20.931078 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m6sr7" event={"ID":"1d67207a-f8f7-4b0d-aa50-be147a8ba810","Type":"ContainerStarted","Data":"b06fd81dd6b136cfdd40cd27af90814a8e71c9b966659ba8b754574cc5ed436c"} Jan 27 14:07:20 crc kubenswrapper[4914]: I0127 14:07:20.955315 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" podStartSLOduration=2.955292052 podStartE2EDuration="2.955292052s" podCreationTimestamp="2026-01-27 14:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:20.946244794 +0000 UTC m=+1399.258594899" watchObservedRunningTime="2026-01-27 14:07:20.955292052 +0000 UTC m=+1399.267642137" Jan 27 14:07:20 crc kubenswrapper[4914]: I0127 14:07:20.967460 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-m6sr7" podStartSLOduration=1.967437126 podStartE2EDuration="1.967437126s" podCreationTimestamp="2026-01-27 14:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:20.966809459 +0000 UTC m=+1399.279159554" watchObservedRunningTime="2026-01-27 14:07:20.967437126 +0000 UTC m=+1399.279787221" Jan 27 14:07:21 crc kubenswrapper[4914]: I0127 14:07:21.539118 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:07:21 crc kubenswrapper[4914]: I0127 14:07:21.560687 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:23 crc kubenswrapper[4914]: I0127 14:07:23.964944 4914 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f66f9483-7be9-4f55-8e6e-144bbc391d55","Type":"ContainerStarted","Data":"d764b8cbe0290c40d76acdc15e120485bd5438556fe3c0346685b7ec6849ff51"} Jan 27 14:07:23 crc kubenswrapper[4914]: I0127 14:07:23.965263 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f66f9483-7be9-4f55-8e6e-144bbc391d55" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d764b8cbe0290c40d76acdc15e120485bd5438556fe3c0346685b7ec6849ff51" gracePeriod=30 Jan 27 14:07:23 crc kubenswrapper[4914]: I0127 14:07:23.970621 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8b24ff69-cfa5-4f60-a771-81dd0abda624","Type":"ContainerStarted","Data":"706aa5bb3437d8c68ea8dd9b787a21ad8d21686bd2e751e75a7ee53ec131a443"} Jan 27 14:07:23 crc kubenswrapper[4914]: I0127 14:07:23.977337 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a64463da-43d7-4c6b-819f-b7dd45a891d0","Type":"ContainerStarted","Data":"fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3"} Jan 27 14:07:23 crc kubenswrapper[4914]: I0127 14:07:23.977399 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a64463da-43d7-4c6b-819f-b7dd45a891d0","Type":"ContainerStarted","Data":"097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e"} Jan 27 14:07:23 crc kubenswrapper[4914]: I0127 14:07:23.977561 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a64463da-43d7-4c6b-819f-b7dd45a891d0" containerName="nova-metadata-log" containerID="cri-o://097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e" gracePeriod=30 Jan 27 14:07:23 crc kubenswrapper[4914]: I0127 14:07:23.977908 4914 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/nova-metadata-0" podUID="a64463da-43d7-4c6b-819f-b7dd45a891d0" containerName="nova-metadata-metadata" containerID="cri-o://fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3" gracePeriod=30 Jan 27 14:07:23 crc kubenswrapper[4914]: I0127 14:07:23.982253 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:07:23 crc kubenswrapper[4914]: I0127 14:07:23.982582 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cc356ffd-559b-403b-8f0f-8bb7518dd9b7" containerName="kube-state-metrics" containerID="cri-o://c83a94901e286d47329b27f22a945245dc9e3c483592f6ddb395b948506ae9d6" gracePeriod=30 Jan 27 14:07:23 crc kubenswrapper[4914]: I0127 14:07:23.982684 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36e5891e-45f7-4898-8172-773b64a58b3a","Type":"ContainerStarted","Data":"83ee45e6c3e1df1a7a7d8c146c2b4685b950eb93db9359556e1044d3c3cf2b35"} Jan 27 14:07:23 crc kubenswrapper[4914]: I0127 14:07:23.982858 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36e5891e-45f7-4898-8172-773b64a58b3a","Type":"ContainerStarted","Data":"6ad23d27f4a81ff0f796a2c63d93992d9d17cac7fefd9455b2b90c371e9c6256"} Jan 27 14:07:23 crc kubenswrapper[4914]: I0127 14:07:23.993164 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.290441568 podStartE2EDuration="6.99313876s" podCreationTimestamp="2026-01-27 14:07:17 +0000 UTC" firstStartedPulling="2026-01-27 14:07:19.066569201 +0000 UTC m=+1397.378919286" lastFinishedPulling="2026-01-27 14:07:22.769266393 +0000 UTC m=+1401.081616478" observedRunningTime="2026-01-27 14:07:23.990304832 +0000 UTC m=+1402.302654917" watchObservedRunningTime="2026-01-27 14:07:23.99313876 +0000 UTC m=+1402.305488845" Jan 27 14:07:24 crc 
kubenswrapper[4914]: I0127 14:07:24.038027 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.490186729 podStartE2EDuration="7.037999861s" podCreationTimestamp="2026-01-27 14:07:17 +0000 UTC" firstStartedPulling="2026-01-27 14:07:19.217637817 +0000 UTC m=+1397.529987902" lastFinishedPulling="2026-01-27 14:07:22.765450949 +0000 UTC m=+1401.077801034" observedRunningTime="2026-01-27 14:07:24.019310638 +0000 UTC m=+1402.331660733" watchObservedRunningTime="2026-01-27 14:07:24.037999861 +0000 UTC m=+1402.350349956" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.047339 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.599523479 podStartE2EDuration="6.047318916s" podCreationTimestamp="2026-01-27 14:07:18 +0000 UTC" firstStartedPulling="2026-01-27 14:07:19.316399907 +0000 UTC m=+1397.628749992" lastFinishedPulling="2026-01-27 14:07:22.764195344 +0000 UTC m=+1401.076545429" observedRunningTime="2026-01-27 14:07:24.043096921 +0000 UTC m=+1402.355447006" watchObservedRunningTime="2026-01-27 14:07:24.047318916 +0000 UTC m=+1402.359669001" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.066629 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.615963641 podStartE2EDuration="6.066604566s" podCreationTimestamp="2026-01-27 14:07:18 +0000 UTC" firstStartedPulling="2026-01-27 14:07:19.31651874 +0000 UTC m=+1397.628868815" lastFinishedPulling="2026-01-27 14:07:22.767159665 +0000 UTC m=+1401.079509740" observedRunningTime="2026-01-27 14:07:24.059057908 +0000 UTC m=+1402.371408003" watchObservedRunningTime="2026-01-27 14:07:24.066604566 +0000 UTC m=+1402.378954661" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.501929 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.536682 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.626016 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64463da-43d7-4c6b-819f-b7dd45a891d0-logs\") pod \"a64463da-43d7-4c6b-819f-b7dd45a891d0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.626131 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zzbl\" (UniqueName: \"kubernetes.io/projected/a64463da-43d7-4c6b-819f-b7dd45a891d0-kube-api-access-7zzbl\") pod \"a64463da-43d7-4c6b-819f-b7dd45a891d0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.626180 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64463da-43d7-4c6b-819f-b7dd45a891d0-config-data\") pod \"a64463da-43d7-4c6b-819f-b7dd45a891d0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.626200 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t2ss\" (UniqueName: \"kubernetes.io/projected/cc356ffd-559b-403b-8f0f-8bb7518dd9b7-kube-api-access-2t2ss\") pod \"cc356ffd-559b-403b-8f0f-8bb7518dd9b7\" (UID: \"cc356ffd-559b-403b-8f0f-8bb7518dd9b7\") " Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.626285 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64463da-43d7-4c6b-819f-b7dd45a891d0-logs" (OuterVolumeSpecName: "logs") pod "a64463da-43d7-4c6b-819f-b7dd45a891d0" (UID: "a64463da-43d7-4c6b-819f-b7dd45a891d0"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.626291 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64463da-43d7-4c6b-819f-b7dd45a891d0-combined-ca-bundle\") pod \"a64463da-43d7-4c6b-819f-b7dd45a891d0\" (UID: \"a64463da-43d7-4c6b-819f-b7dd45a891d0\") " Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.626692 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64463da-43d7-4c6b-819f-b7dd45a891d0-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.631444 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc356ffd-559b-403b-8f0f-8bb7518dd9b7-kube-api-access-2t2ss" (OuterVolumeSpecName: "kube-api-access-2t2ss") pod "cc356ffd-559b-403b-8f0f-8bb7518dd9b7" (UID: "cc356ffd-559b-403b-8f0f-8bb7518dd9b7"). InnerVolumeSpecName "kube-api-access-2t2ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.631753 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64463da-43d7-4c6b-819f-b7dd45a891d0-kube-api-access-7zzbl" (OuterVolumeSpecName: "kube-api-access-7zzbl") pod "a64463da-43d7-4c6b-819f-b7dd45a891d0" (UID: "a64463da-43d7-4c6b-819f-b7dd45a891d0"). InnerVolumeSpecName "kube-api-access-7zzbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.659513 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64463da-43d7-4c6b-819f-b7dd45a891d0-config-data" (OuterVolumeSpecName: "config-data") pod "a64463da-43d7-4c6b-819f-b7dd45a891d0" (UID: "a64463da-43d7-4c6b-819f-b7dd45a891d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.661663 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64463da-43d7-4c6b-819f-b7dd45a891d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a64463da-43d7-4c6b-819f-b7dd45a891d0" (UID: "a64463da-43d7-4c6b-819f-b7dd45a891d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.728267 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zzbl\" (UniqueName: \"kubernetes.io/projected/a64463da-43d7-4c6b-819f-b7dd45a891d0-kube-api-access-7zzbl\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.728318 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64463da-43d7-4c6b-819f-b7dd45a891d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.728333 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t2ss\" (UniqueName: \"kubernetes.io/projected/cc356ffd-559b-403b-8f0f-8bb7518dd9b7-kube-api-access-2t2ss\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.728344 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64463da-43d7-4c6b-819f-b7dd45a891d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.993128 4914 generic.go:334] "Generic (PLEG): container finished" podID="a64463da-43d7-4c6b-819f-b7dd45a891d0" containerID="fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3" exitCode=0 Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.993173 4914 generic.go:334] "Generic (PLEG): container finished" 
podID="a64463da-43d7-4c6b-819f-b7dd45a891d0" containerID="097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e" exitCode=143 Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.993191 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.993196 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a64463da-43d7-4c6b-819f-b7dd45a891d0","Type":"ContainerDied","Data":"fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3"} Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.993233 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a64463da-43d7-4c6b-819f-b7dd45a891d0","Type":"ContainerDied","Data":"097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e"} Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.993247 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a64463da-43d7-4c6b-819f-b7dd45a891d0","Type":"ContainerDied","Data":"7ccd0483f7f3787d0b3074284d71431e5b40c6d86361e0a06aefcb483d5403ae"} Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.993266 4914 scope.go:117] "RemoveContainer" containerID="fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3" Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.995106 4914 generic.go:334] "Generic (PLEG): container finished" podID="cc356ffd-559b-403b-8f0f-8bb7518dd9b7" containerID="c83a94901e286d47329b27f22a945245dc9e3c483592f6ddb395b948506ae9d6" exitCode=2 Jan 27 14:07:24 crc kubenswrapper[4914]: I0127 14:07:24.995891 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.006163 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc356ffd-559b-403b-8f0f-8bb7518dd9b7","Type":"ContainerDied","Data":"c83a94901e286d47329b27f22a945245dc9e3c483592f6ddb395b948506ae9d6"} Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.006223 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc356ffd-559b-403b-8f0f-8bb7518dd9b7","Type":"ContainerDied","Data":"3aa1459f1e922323f59079f3f99b59e522ea765b0918ce984947c17cde4f0d06"} Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.025722 4914 scope.go:117] "RemoveContainer" containerID="097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.049902 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.064914 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.075111 4914 scope.go:117] "RemoveContainer" containerID="fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3" Jan 27 14:07:25 crc kubenswrapper[4914]: E0127 14:07:25.080640 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3\": container with ID starting with fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3 not found: ID does not exist" containerID="fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.080686 4914 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3"} err="failed to get container status \"fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3\": rpc error: code = NotFound desc = could not find container \"fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3\": container with ID starting with fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3 not found: ID does not exist" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.080714 4914 scope.go:117] "RemoveContainer" containerID="097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e" Jan 27 14:07:25 crc kubenswrapper[4914]: E0127 14:07:25.086669 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e\": container with ID starting with 097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e not found: ID does not exist" containerID="097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.086711 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e"} err="failed to get container status \"097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e\": rpc error: code = NotFound desc = could not find container \"097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e\": container with ID starting with 097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e not found: ID does not exist" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.086737 4914 scope.go:117] "RemoveContainer" containerID="fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.089030 4914 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3"} err="failed to get container status \"fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3\": rpc error: code = NotFound desc = could not find container \"fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3\": container with ID starting with fee98d6c26d37e31ccf9eff27dc11c74e9e82d8641bcf6124c4b6448b6bbc4d3 not found: ID does not exist" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.089083 4914 scope.go:117] "RemoveContainer" containerID="097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.090750 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e"} err="failed to get container status \"097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e\": rpc error: code = NotFound desc = could not find container \"097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e\": container with ID starting with 097923f52466a75c5d02b1333f051f3bac8cdad5fce67e2680b9efd6321cda1e not found: ID does not exist" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.090772 4914 scope.go:117] "RemoveContainer" containerID="c83a94901e286d47329b27f22a945245dc9e3c483592f6ddb395b948506ae9d6" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.105996 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.122265 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:07:25 crc kubenswrapper[4914]: E0127 14:07:25.122734 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc356ffd-559b-403b-8f0f-8bb7518dd9b7" containerName="kube-state-metrics" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 
14:07:25.122759 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc356ffd-559b-403b-8f0f-8bb7518dd9b7" containerName="kube-state-metrics" Jan 27 14:07:25 crc kubenswrapper[4914]: E0127 14:07:25.122784 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64463da-43d7-4c6b-819f-b7dd45a891d0" containerName="nova-metadata-metadata" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.122793 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64463da-43d7-4c6b-819f-b7dd45a891d0" containerName="nova-metadata-metadata" Jan 27 14:07:25 crc kubenswrapper[4914]: E0127 14:07:25.122811 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64463da-43d7-4c6b-819f-b7dd45a891d0" containerName="nova-metadata-log" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.122817 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64463da-43d7-4c6b-819f-b7dd45a891d0" containerName="nova-metadata-log" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.123115 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64463da-43d7-4c6b-819f-b7dd45a891d0" containerName="nova-metadata-metadata" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.123132 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc356ffd-559b-403b-8f0f-8bb7518dd9b7" containerName="kube-state-metrics" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.123152 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64463da-43d7-4c6b-819f-b7dd45a891d0" containerName="nova-metadata-log" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.123938 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.125707 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.125965 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.134985 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.145612 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.153162 4914 scope.go:117] "RemoveContainer" containerID="c83a94901e286d47329b27f22a945245dc9e3c483592f6ddb395b948506ae9d6" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.154453 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:25 crc kubenswrapper[4914]: E0127 14:07:25.154498 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c83a94901e286d47329b27f22a945245dc9e3c483592f6ddb395b948506ae9d6\": container with ID starting with c83a94901e286d47329b27f22a945245dc9e3c483592f6ddb395b948506ae9d6 not found: ID does not exist" containerID="c83a94901e286d47329b27f22a945245dc9e3c483592f6ddb395b948506ae9d6" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.154540 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c83a94901e286d47329b27f22a945245dc9e3c483592f6ddb395b948506ae9d6"} err="failed to get container status \"c83a94901e286d47329b27f22a945245dc9e3c483592f6ddb395b948506ae9d6\": rpc error: code = NotFound desc = could not find container \"c83a94901e286d47329b27f22a945245dc9e3c483592f6ddb395b948506ae9d6\": container with ID starting 
with c83a94901e286d47329b27f22a945245dc9e3c483592f6ddb395b948506ae9d6 not found: ID does not exist" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.156109 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.158793 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.161273 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.169607 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.237430 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.237505 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a67de105-d5d3-48f3-a642-ba7be3dc0920-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a67de105-d5d3-48f3-a642-ba7be3dc0920\") " pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.237634 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a67de105-d5d3-48f3-a642-ba7be3dc0920-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a67de105-d5d3-48f3-a642-ba7be3dc0920\") " 
pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.237701 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67de105-d5d3-48f3-a642-ba7be3dc0920-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a67de105-d5d3-48f3-a642-ba7be3dc0920\") " pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.237769 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dee8248-9605-46f3-8062-979106e8f5f1-logs\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.237857 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-config-data\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.237892 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.237955 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5nhx\" (UniqueName: \"kubernetes.io/projected/a67de105-d5d3-48f3-a642-ba7be3dc0920-kube-api-access-p5nhx\") pod \"kube-state-metrics-0\" (UID: \"a67de105-d5d3-48f3-a642-ba7be3dc0920\") " pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc 
kubenswrapper[4914]: I0127 14:07:25.238092 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trl95\" (UniqueName: \"kubernetes.io/projected/4dee8248-9605-46f3-8062-979106e8f5f1-kube-api-access-trl95\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.340091 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a67de105-d5d3-48f3-a642-ba7be3dc0920-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a67de105-d5d3-48f3-a642-ba7be3dc0920\") " pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.340538 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67de105-d5d3-48f3-a642-ba7be3dc0920-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a67de105-d5d3-48f3-a642-ba7be3dc0920\") " pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.340632 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dee8248-9605-46f3-8062-979106e8f5f1-logs\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.340719 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-config-data\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.340789 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.340891 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5nhx\" (UniqueName: \"kubernetes.io/projected/a67de105-d5d3-48f3-a642-ba7be3dc0920-kube-api-access-p5nhx\") pod \"kube-state-metrics-0\" (UID: \"a67de105-d5d3-48f3-a642-ba7be3dc0920\") " pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.341125 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trl95\" (UniqueName: \"kubernetes.io/projected/4dee8248-9605-46f3-8062-979106e8f5f1-kube-api-access-trl95\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.341204 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.341230 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a67de105-d5d3-48f3-a642-ba7be3dc0920-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a67de105-d5d3-48f3-a642-ba7be3dc0920\") " pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.342020 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dee8248-9605-46f3-8062-979106e8f5f1-logs\") pod 
\"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.347033 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-config-data\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.349606 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67de105-d5d3-48f3-a642-ba7be3dc0920-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a67de105-d5d3-48f3-a642-ba7be3dc0920\") " pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.350225 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.358321 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.362093 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a67de105-d5d3-48f3-a642-ba7be3dc0920-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a67de105-d5d3-48f3-a642-ba7be3dc0920\") " pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 
14:07:25.363217 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5nhx\" (UniqueName: \"kubernetes.io/projected/a67de105-d5d3-48f3-a642-ba7be3dc0920-kube-api-access-p5nhx\") pod \"kube-state-metrics-0\" (UID: \"a67de105-d5d3-48f3-a642-ba7be3dc0920\") " pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.365686 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trl95\" (UniqueName: \"kubernetes.io/projected/4dee8248-9605-46f3-8062-979106e8f5f1-kube-api-access-trl95\") pod \"nova-metadata-0\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.365975 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a67de105-d5d3-48f3-a642-ba7be3dc0920-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a67de105-d5d3-48f3-a642-ba7be3dc0920\") " pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.448383 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.481449 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.973774 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:07:25 crc kubenswrapper[4914]: W0127 14:07:25.990188 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dee8248_9605_46f3_8062_979106e8f5f1.slice/crio-83f2b17e6dd912a7e3dfefec8f371f30d557a4dedc42a9b94b7d65fb46450b9c WatchSource:0}: Error finding container 83f2b17e6dd912a7e3dfefec8f371f30d557a4dedc42a9b94b7d65fb46450b9c: Status 404 returned error can't find the container with id 83f2b17e6dd912a7e3dfefec8f371f30d557a4dedc42a9b94b7d65fb46450b9c Jan 27 14:07:25 crc kubenswrapper[4914]: I0127 14:07:25.992020 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:26 crc kubenswrapper[4914]: I0127 14:07:26.014870 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dee8248-9605-46f3-8062-979106e8f5f1","Type":"ContainerStarted","Data":"83f2b17e6dd912a7e3dfefec8f371f30d557a4dedc42a9b94b7d65fb46450b9c"} Jan 27 14:07:26 crc kubenswrapper[4914]: I0127 14:07:26.017380 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a67de105-d5d3-48f3-a642-ba7be3dc0920","Type":"ContainerStarted","Data":"1772d3baebf4fec95b06de64c8e3d9970803198be39508edc83cd020c0fbd874"} Jan 27 14:07:26 crc kubenswrapper[4914]: I0127 14:07:26.117909 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:07:26 crc kubenswrapper[4914]: I0127 14:07:26.118352 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="proxy-httpd" containerID="cri-o://4a517bb0cb845a939cf1f2f21c49f3e50ba43c0141c80c2ad66e583e11cad3ae" 
gracePeriod=30 Jan 27 14:07:26 crc kubenswrapper[4914]: I0127 14:07:26.118355 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="ceilometer-notification-agent" containerID="cri-o://2bde8d36ec54c063fb58db8839a0deb3f66be93a6fe1d2555cf4f5eba33e84b3" gracePeriod=30 Jan 27 14:07:26 crc kubenswrapper[4914]: I0127 14:07:26.118216 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="ceilometer-central-agent" containerID="cri-o://8671d160ed2b7d759efd880eab6e024b910a3314b90142ccf06949d0ac922e63" gracePeriod=30 Jan 27 14:07:26 crc kubenswrapper[4914]: I0127 14:07:26.118365 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="sg-core" containerID="cri-o://fa8842a92a6f55830ed93efc47ca2b83186c99f714cfbb125e4a239dc202dcef" gracePeriod=30 Jan 27 14:07:26 crc kubenswrapper[4914]: I0127 14:07:26.314886 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a64463da-43d7-4c6b-819f-b7dd45a891d0" path="/var/lib/kubelet/pods/a64463da-43d7-4c6b-819f-b7dd45a891d0/volumes" Jan 27 14:07:26 crc kubenswrapper[4914]: I0127 14:07:26.317012 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc356ffd-559b-403b-8f0f-8bb7518dd9b7" path="/var/lib/kubelet/pods/cc356ffd-559b-403b-8f0f-8bb7518dd9b7/volumes" Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.032671 4914 generic.go:334] "Generic (PLEG): container finished" podID="3b3f7735-ffe2-40bc-9055-67f89a4a3a95" containerID="4fab63ecbb2167d4aa10709d657a7afa1815e139ebae6fe7d1cb9e61155c09df" exitCode=0 Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.032770 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hvsxg" 
event={"ID":"3b3f7735-ffe2-40bc-9055-67f89a4a3a95","Type":"ContainerDied","Data":"4fab63ecbb2167d4aa10709d657a7afa1815e139ebae6fe7d1cb9e61155c09df"} Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.036801 4914 generic.go:334] "Generic (PLEG): container finished" podID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerID="4a517bb0cb845a939cf1f2f21c49f3e50ba43c0141c80c2ad66e583e11cad3ae" exitCode=0 Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.036967 4914 generic.go:334] "Generic (PLEG): container finished" podID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerID="fa8842a92a6f55830ed93efc47ca2b83186c99f714cfbb125e4a239dc202dcef" exitCode=2 Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.037091 4914 generic.go:334] "Generic (PLEG): container finished" podID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerID="8671d160ed2b7d759efd880eab6e024b910a3314b90142ccf06949d0ac922e63" exitCode=0 Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.036863 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16d38f9-5614-4637-a98f-9c47190ccff4","Type":"ContainerDied","Data":"4a517bb0cb845a939cf1f2f21c49f3e50ba43c0141c80c2ad66e583e11cad3ae"} Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.037319 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16d38f9-5614-4637-a98f-9c47190ccff4","Type":"ContainerDied","Data":"fa8842a92a6f55830ed93efc47ca2b83186c99f714cfbb125e4a239dc202dcef"} Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.037422 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16d38f9-5614-4637-a98f-9c47190ccff4","Type":"ContainerDied","Data":"8671d160ed2b7d759efd880eab6e024b910a3314b90142ccf06949d0ac922e63"} Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.039483 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"4dee8248-9605-46f3-8062-979106e8f5f1","Type":"ContainerStarted","Data":"b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123"} Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.039520 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dee8248-9605-46f3-8062-979106e8f5f1","Type":"ContainerStarted","Data":"771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4"} Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.041084 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a67de105-d5d3-48f3-a642-ba7be3dc0920","Type":"ContainerStarted","Data":"8368baf4f955f1c66c0fc6e9c17ac096e9fcb766c5785dc67950dd1da1d530f8"} Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.041267 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.074132 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.666874295 podStartE2EDuration="2.07411184s" podCreationTimestamp="2026-01-27 14:07:25 +0000 UTC" firstStartedPulling="2026-01-27 14:07:25.982120073 +0000 UTC m=+1404.294470178" lastFinishedPulling="2026-01-27 14:07:26.389357638 +0000 UTC m=+1404.701707723" observedRunningTime="2026-01-27 14:07:27.064215788 +0000 UTC m=+1405.376565873" watchObservedRunningTime="2026-01-27 14:07:27.07411184 +0000 UTC m=+1405.386461925" Jan 27 14:07:27 crc kubenswrapper[4914]: I0127 14:07:27.088264 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.088243768 podStartE2EDuration="2.088243768s" podCreationTimestamp="2026-01-27 14:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:27.083109097 +0000 UTC 
m=+1405.395459182" watchObservedRunningTime="2026-01-27 14:07:27.088243768 +0000 UTC m=+1405.400593853" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.354733 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.355075 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.405472 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.431304 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.501388 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jv5w\" (UniqueName: \"kubernetes.io/projected/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-kube-api-access-6jv5w\") pod \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.501442 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-combined-ca-bundle\") pod \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.501516 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-scripts\") pod \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.501551 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-config-data\") pod \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\" (UID: \"3b3f7735-ffe2-40bc-9055-67f89a4a3a95\") " Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.507279 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-scripts" (OuterVolumeSpecName: "scripts") pod "3b3f7735-ffe2-40bc-9055-67f89a4a3a95" (UID: "3b3f7735-ffe2-40bc-9055-67f89a4a3a95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.527166 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-kube-api-access-6jv5w" (OuterVolumeSpecName: "kube-api-access-6jv5w") pod "3b3f7735-ffe2-40bc-9055-67f89a4a3a95" (UID: "3b3f7735-ffe2-40bc-9055-67f89a4a3a95"). InnerVolumeSpecName "kube-api-access-6jv5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.533070 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-config-data" (OuterVolumeSpecName: "config-data") pod "3b3f7735-ffe2-40bc-9055-67f89a4a3a95" (UID: "3b3f7735-ffe2-40bc-9055-67f89a4a3a95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.534040 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b3f7735-ffe2-40bc-9055-67f89a4a3a95" (UID: "3b3f7735-ffe2-40bc-9055-67f89a4a3a95"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.604695 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jv5w\" (UniqueName: \"kubernetes.io/projected/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-kube-api-access-6jv5w\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.604752 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.604766 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.604777 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b3f7735-ffe2-40bc-9055-67f89a4a3a95-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.657698 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.657805 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.694391 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.747591 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:07:28 crc kubenswrapper[4914]: I0127 14:07:28.845411 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-drd5h"] Jan 27 14:07:28 crc 
kubenswrapper[4914]: I0127 14:07:28.845681 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" podUID="6cb7fe78-4222-4e97-9b06-62e24491812a" containerName="dnsmasq-dns" containerID="cri-o://cd66fdab56486629a9804f6a1bf54a8d1f0c8379c0878301a9eb10b735a5eadf" gracePeriod=10 Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.061946 4914 generic.go:334] "Generic (PLEG): container finished" podID="6cb7fe78-4222-4e97-9b06-62e24491812a" containerID="cd66fdab56486629a9804f6a1bf54a8d1f0c8379c0878301a9eb10b735a5eadf" exitCode=0 Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.062002 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" event={"ID":"6cb7fe78-4222-4e97-9b06-62e24491812a","Type":"ContainerDied","Data":"cd66fdab56486629a9804f6a1bf54a8d1f0c8379c0878301a9eb10b735a5eadf"} Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.072114 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hvsxg" event={"ID":"3b3f7735-ffe2-40bc-9055-67f89a4a3a95","Type":"ContainerDied","Data":"d2b37b146423ecbac8e363d1ee68a93c8c9eef95bed128404f80de60579566e3"} Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.072174 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2b37b146423ecbac8e363d1ee68a93c8c9eef95bed128404f80de60579566e3" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.072225 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hvsxg" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.075350 4914 generic.go:334] "Generic (PLEG): container finished" podID="1d67207a-f8f7-4b0d-aa50-be147a8ba810" containerID="7771ca1de9d47c34ff125fa812e479371d69aae563a4f8ff18c5927911fe867c" exitCode=0 Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.075864 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m6sr7" event={"ID":"1d67207a-f8f7-4b0d-aa50-be147a8ba810","Type":"ContainerDied","Data":"7771ca1de9d47c34ff125fa812e479371d69aae563a4f8ff18c5927911fe867c"} Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.116808 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.286363 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.286614 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4dee8248-9605-46f3-8062-979106e8f5f1" containerName="nova-metadata-log" containerID="cri-o://771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4" gracePeriod=30 Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.286758 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4dee8248-9605-46f3-8062-979106e8f5f1" containerName="nova-metadata-metadata" containerID="cri-o://b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123" gracePeriod=30 Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.300223 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.302376 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="36e5891e-45f7-4898-8172-773b64a58b3a" containerName="nova-api-log" containerID="cri-o://6ad23d27f4a81ff0f796a2c63d93992d9d17cac7fefd9455b2b90c371e9c6256" gracePeriod=30 Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.302844 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="36e5891e-45f7-4898-8172-773b64a58b3a" containerName="nova-api-api" containerID="cri-o://83ee45e6c3e1df1a7a7d8c146c2b4685b950eb93db9359556e1044d3c3cf2b35" gracePeriod=30 Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.309491 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="36e5891e-45f7-4898-8172-773b64a58b3a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": EOF" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.309862 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="36e5891e-45f7-4898-8172-773b64a58b3a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": EOF" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.331351 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.431946 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-dns-svc\") pod \"6cb7fe78-4222-4e97-9b06-62e24491812a\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.432004 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-dns-swift-storage-0\") pod \"6cb7fe78-4222-4e97-9b06-62e24491812a\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.432044 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-config\") pod \"6cb7fe78-4222-4e97-9b06-62e24491812a\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.432084 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-ovsdbserver-nb\") pod \"6cb7fe78-4222-4e97-9b06-62e24491812a\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.432269 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-ovsdbserver-sb\") pod \"6cb7fe78-4222-4e97-9b06-62e24491812a\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.432333 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6fgs\" 
(UniqueName: \"kubernetes.io/projected/6cb7fe78-4222-4e97-9b06-62e24491812a-kube-api-access-q6fgs\") pod \"6cb7fe78-4222-4e97-9b06-62e24491812a\" (UID: \"6cb7fe78-4222-4e97-9b06-62e24491812a\") " Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.442476 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cb7fe78-4222-4e97-9b06-62e24491812a-kube-api-access-q6fgs" (OuterVolumeSpecName: "kube-api-access-q6fgs") pod "6cb7fe78-4222-4e97-9b06-62e24491812a" (UID: "6cb7fe78-4222-4e97-9b06-62e24491812a"). InnerVolumeSpecName "kube-api-access-q6fgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.502225 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6cb7fe78-4222-4e97-9b06-62e24491812a" (UID: "6cb7fe78-4222-4e97-9b06-62e24491812a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.513703 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6cb7fe78-4222-4e97-9b06-62e24491812a" (UID: "6cb7fe78-4222-4e97-9b06-62e24491812a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.517771 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-config" (OuterVolumeSpecName: "config") pod "6cb7fe78-4222-4e97-9b06-62e24491812a" (UID: "6cb7fe78-4222-4e97-9b06-62e24491812a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.535760 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6cb7fe78-4222-4e97-9b06-62e24491812a" (UID: "6cb7fe78-4222-4e97-9b06-62e24491812a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.539121 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.539182 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.539196 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6fgs\" (UniqueName: \"kubernetes.io/projected/6cb7fe78-4222-4e97-9b06-62e24491812a-kube-api-access-q6fgs\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.539211 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.539222 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.542365 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6cb7fe78-4222-4e97-9b06-62e24491812a" (UID: "6cb7fe78-4222-4e97-9b06-62e24491812a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.642275 4914 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cb7fe78-4222-4e97-9b06-62e24491812a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.644582 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:07:29 crc kubenswrapper[4914]: I0127 14:07:29.883950 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.049563 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-config-data\") pod \"4dee8248-9605-46f3-8062-979106e8f5f1\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.049627 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trl95\" (UniqueName: \"kubernetes.io/projected/4dee8248-9605-46f3-8062-979106e8f5f1-kube-api-access-trl95\") pod \"4dee8248-9605-46f3-8062-979106e8f5f1\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.049693 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-nova-metadata-tls-certs\") pod \"4dee8248-9605-46f3-8062-979106e8f5f1\" (UID: 
\"4dee8248-9605-46f3-8062-979106e8f5f1\") " Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.049737 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-combined-ca-bundle\") pod \"4dee8248-9605-46f3-8062-979106e8f5f1\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.049797 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dee8248-9605-46f3-8062-979106e8f5f1-logs\") pod \"4dee8248-9605-46f3-8062-979106e8f5f1\" (UID: \"4dee8248-9605-46f3-8062-979106e8f5f1\") " Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.050628 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dee8248-9605-46f3-8062-979106e8f5f1-logs" (OuterVolumeSpecName: "logs") pod "4dee8248-9605-46f3-8062-979106e8f5f1" (UID: "4dee8248-9605-46f3-8062-979106e8f5f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.058022 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dee8248-9605-46f3-8062-979106e8f5f1-kube-api-access-trl95" (OuterVolumeSpecName: "kube-api-access-trl95") pod "4dee8248-9605-46f3-8062-979106e8f5f1" (UID: "4dee8248-9605-46f3-8062-979106e8f5f1"). InnerVolumeSpecName "kube-api-access-trl95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.076654 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dee8248-9605-46f3-8062-979106e8f5f1" (UID: "4dee8248-9605-46f3-8062-979106e8f5f1"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.078727 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-config-data" (OuterVolumeSpecName: "config-data") pod "4dee8248-9605-46f3-8062-979106e8f5f1" (UID: "4dee8248-9605-46f3-8062-979106e8f5f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.086280 4914 generic.go:334] "Generic (PLEG): container finished" podID="36e5891e-45f7-4898-8172-773b64a58b3a" containerID="6ad23d27f4a81ff0f796a2c63d93992d9d17cac7fefd9455b2b90c371e9c6256" exitCode=143 Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.086938 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36e5891e-45f7-4898-8172-773b64a58b3a","Type":"ContainerDied","Data":"6ad23d27f4a81ff0f796a2c63d93992d9d17cac7fefd9455b2b90c371e9c6256"} Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.088369 4914 generic.go:334] "Generic (PLEG): container finished" podID="4dee8248-9605-46f3-8062-979106e8f5f1" containerID="b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123" exitCode=0 Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.088389 4914 generic.go:334] "Generic (PLEG): container finished" podID="4dee8248-9605-46f3-8062-979106e8f5f1" containerID="771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4" exitCode=143 Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.088443 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dee8248-9605-46f3-8062-979106e8f5f1","Type":"ContainerDied","Data":"b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123"} Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.088464 4914 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.088480 4914 scope.go:117] "RemoveContainer" containerID="b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.088464 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dee8248-9605-46f3-8062-979106e8f5f1","Type":"ContainerDied","Data":"771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4"} Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.088603 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dee8248-9605-46f3-8062-979106e8f5f1","Type":"ContainerDied","Data":"83f2b17e6dd912a7e3dfefec8f371f30d557a4dedc42a9b94b7d65fb46450b9c"} Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.094180 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" event={"ID":"6cb7fe78-4222-4e97-9b06-62e24491812a","Type":"ContainerDied","Data":"cc647c470ad9e612c519cf7d9cfff299319b843f30cd275cdccece4181fe1210"} Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.094199 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-drd5h" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.118969 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4dee8248-9605-46f3-8062-979106e8f5f1" (UID: "4dee8248-9605-46f3-8062-979106e8f5f1"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.136964 4914 scope.go:117] "RemoveContainer" containerID="771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.147508 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-drd5h"] Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.154185 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.154228 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trl95\" (UniqueName: \"kubernetes.io/projected/4dee8248-9605-46f3-8062-979106e8f5f1-kube-api-access-trl95\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.154241 4914 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.154253 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dee8248-9605-46f3-8062-979106e8f5f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.154265 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dee8248-9605-46f3-8062-979106e8f5f1-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.157148 4914 scope.go:117] "RemoveContainer" containerID="b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123" Jan 27 14:07:30 crc kubenswrapper[4914]: E0127 14:07:30.159081 4914 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123\": container with ID starting with b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123 not found: ID does not exist" containerID="b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.159207 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123"} err="failed to get container status \"b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123\": rpc error: code = NotFound desc = could not find container \"b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123\": container with ID starting with b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123 not found: ID does not exist" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.159326 4914 scope.go:117] "RemoveContainer" containerID="771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.159486 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-drd5h"] Jan 27 14:07:30 crc kubenswrapper[4914]: E0127 14:07:30.160549 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4\": container with ID starting with 771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4 not found: ID does not exist" containerID="771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.160575 4914 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4"} err="failed to get container status \"771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4\": rpc error: code = NotFound desc = could not find container \"771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4\": container with ID starting with 771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4 not found: ID does not exist" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.160593 4914 scope.go:117] "RemoveContainer" containerID="b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.160967 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123"} err="failed to get container status \"b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123\": rpc error: code = NotFound desc = could not find container \"b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123\": container with ID starting with b3aa57c1c876be9bd52811c18c8680522557ea2c23368d5f55acf675a3b34123 not found: ID does not exist" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.160996 4914 scope.go:117] "RemoveContainer" containerID="771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.163018 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4"} err="failed to get container status \"771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4\": rpc error: code = NotFound desc = could not find container \"771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4\": container with ID starting with 771e41183ba3c5043dd927f10d306e8792eac8f3e09792d8779dfd9449a412f4 not found: ID does not 
exist" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.163039 4914 scope.go:117] "RemoveContainer" containerID="cd66fdab56486629a9804f6a1bf54a8d1f0c8379c0878301a9eb10b735a5eadf" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.183047 4914 scope.go:117] "RemoveContainer" containerID="3e8f3e5eb47a182dceeb12a2ac6194eef2238e658ebcbe8dae48ceb4d691ed56" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.316372 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cb7fe78-4222-4e97-9b06-62e24491812a" path="/var/lib/kubelet/pods/6cb7fe78-4222-4e97-9b06-62e24491812a/volumes" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.420573 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.442011 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.449328 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:30 crc kubenswrapper[4914]: E0127 14:07:30.450315 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3f7735-ffe2-40bc-9055-67f89a4a3a95" containerName="nova-manage" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.450425 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3f7735-ffe2-40bc-9055-67f89a4a3a95" containerName="nova-manage" Jan 27 14:07:30 crc kubenswrapper[4914]: E0127 14:07:30.450504 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb7fe78-4222-4e97-9b06-62e24491812a" containerName="init" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.450612 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb7fe78-4222-4e97-9b06-62e24491812a" containerName="init" Jan 27 14:07:30 crc kubenswrapper[4914]: E0127 14:07:30.450889 4914 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6cb7fe78-4222-4e97-9b06-62e24491812a" containerName="dnsmasq-dns" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.450969 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb7fe78-4222-4e97-9b06-62e24491812a" containerName="dnsmasq-dns" Jan 27 14:07:30 crc kubenswrapper[4914]: E0127 14:07:30.451044 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dee8248-9605-46f3-8062-979106e8f5f1" containerName="nova-metadata-metadata" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.451133 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dee8248-9605-46f3-8062-979106e8f5f1" containerName="nova-metadata-metadata" Jan 27 14:07:30 crc kubenswrapper[4914]: E0127 14:07:30.451223 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dee8248-9605-46f3-8062-979106e8f5f1" containerName="nova-metadata-log" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.451358 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dee8248-9605-46f3-8062-979106e8f5f1" containerName="nova-metadata-log" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.451719 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dee8248-9605-46f3-8062-979106e8f5f1" containerName="nova-metadata-metadata" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.451817 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dee8248-9605-46f3-8062-979106e8f5f1" containerName="nova-metadata-log" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.451933 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cb7fe78-4222-4e97-9b06-62e24491812a" containerName="dnsmasq-dns" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.452019 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3f7735-ffe2-40bc-9055-67f89a4a3a95" containerName="nova-manage" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.480501 4914 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.484247 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.485883 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.502167 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.545768 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.581978 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.582132 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f18bf8b-0605-4134-9164-6b260b45d655-logs\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.582353 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-config-data\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.582369 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.582443 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd7m4\" (UniqueName: \"kubernetes.io/projected/1f18bf8b-0605-4134-9164-6b260b45d655-kube-api-access-rd7m4\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.683709 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-config-data\") pod \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.683771 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-combined-ca-bundle\") pod \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.683959 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdf8q\" (UniqueName: \"kubernetes.io/projected/1d67207a-f8f7-4b0d-aa50-be147a8ba810-kube-api-access-rdf8q\") pod \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.684057 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-scripts\") pod \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\" (UID: \"1d67207a-f8f7-4b0d-aa50-be147a8ba810\") " Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.684392 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f18bf8b-0605-4134-9164-6b260b45d655-logs\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.684614 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-config-data\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.684827 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f18bf8b-0605-4134-9164-6b260b45d655-logs\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.684645 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.685278 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd7m4\" (UniqueName: \"kubernetes.io/projected/1f18bf8b-0605-4134-9164-6b260b45d655-kube-api-access-rd7m4\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 
14:07:30.685439 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.688417 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-config-data\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.689335 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d67207a-f8f7-4b0d-aa50-be147a8ba810-kube-api-access-rdf8q" (OuterVolumeSpecName: "kube-api-access-rdf8q") pod "1d67207a-f8f7-4b0d-aa50-be147a8ba810" (UID: "1d67207a-f8f7-4b0d-aa50-be147a8ba810"). InnerVolumeSpecName "kube-api-access-rdf8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.691620 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.698538 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-scripts" (OuterVolumeSpecName: "scripts") pod "1d67207a-f8f7-4b0d-aa50-be147a8ba810" (UID: "1d67207a-f8f7-4b0d-aa50-be147a8ba810"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.699555 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.706542 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd7m4\" (UniqueName: \"kubernetes.io/projected/1f18bf8b-0605-4134-9164-6b260b45d655-kube-api-access-rd7m4\") pod \"nova-metadata-0\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " pod="openstack/nova-metadata-0" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.714903 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d67207a-f8f7-4b0d-aa50-be147a8ba810" (UID: "1d67207a-f8f7-4b0d-aa50-be147a8ba810"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.716453 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-config-data" (OuterVolumeSpecName: "config-data") pod "1d67207a-f8f7-4b0d-aa50-be147a8ba810" (UID: "1d67207a-f8f7-4b0d-aa50-be147a8ba810"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.786926 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.786956 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.786967 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d67207a-f8f7-4b0d-aa50-be147a8ba810-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.786978 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdf8q\" (UniqueName: \"kubernetes.io/projected/1d67207a-f8f7-4b0d-aa50-be147a8ba810-kube-api-access-rdf8q\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:30 crc kubenswrapper[4914]: I0127 14:07:30.866921 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.118920 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m6sr7" event={"ID":"1d67207a-f8f7-4b0d-aa50-be147a8ba810","Type":"ContainerDied","Data":"b06fd81dd6b136cfdd40cd27af90814a8e71c9b966659ba8b754574cc5ed436c"} Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.119201 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b06fd81dd6b136cfdd40cd27af90814a8e71c9b966659ba8b754574cc5ed436c" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.119296 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m6sr7" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.132468 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8b24ff69-cfa5-4f60-a771-81dd0abda624" containerName="nova-scheduler-scheduler" containerID="cri-o://706aa5bb3437d8c68ea8dd9b787a21ad8d21686bd2e751e75a7ee53ec131a443" gracePeriod=30 Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.185645 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 14:07:31 crc kubenswrapper[4914]: E0127 14:07:31.186104 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d67207a-f8f7-4b0d-aa50-be147a8ba810" containerName="nova-cell1-conductor-db-sync" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.186125 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d67207a-f8f7-4b0d-aa50-be147a8ba810" containerName="nova-cell1-conductor-db-sync" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.186306 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d67207a-f8f7-4b0d-aa50-be147a8ba810" containerName="nova-cell1-conductor-db-sync" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.187228 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.189071 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.208928 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.298085 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6038b9a-0f9f-4457-abd7-e4c71ef50128-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.298165 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6038b9a-0f9f-4457-abd7-e4c71ef50128-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.298309 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv4gf\" (UniqueName: \"kubernetes.io/projected/d6038b9a-0f9f-4457-abd7-e4c71ef50128-kube-api-access-pv4gf\") pod \"nova-cell1-conductor-0\" (UID: \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.400025 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv4gf\" (UniqueName: \"kubernetes.io/projected/d6038b9a-0f9f-4457-abd7-e4c71ef50128-kube-api-access-pv4gf\") pod \"nova-cell1-conductor-0\" (UID: \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\") " pod="openstack/nova-cell1-conductor-0" Jan 27 
14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.400095 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6038b9a-0f9f-4457-abd7-e4c71ef50128-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.400201 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6038b9a-0f9f-4457-abd7-e4c71ef50128-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.407493 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6038b9a-0f9f-4457-abd7-e4c71ef50128-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.407987 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6038b9a-0f9f-4457-abd7-e4c71ef50128-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.409939 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:07:31 crc kubenswrapper[4914]: W0127 14:07:31.415842 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f18bf8b_0605_4134_9164_6b260b45d655.slice/crio-e902e3ecd6a842f33b6591c675545515e060f269fe93944f8a917e482702bedf WatchSource:0}: Error finding container 
e902e3ecd6a842f33b6591c675545515e060f269fe93944f8a917e482702bedf: Status 404 returned error can't find the container with id e902e3ecd6a842f33b6591c675545515e060f269fe93944f8a917e482702bedf Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.419781 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv4gf\" (UniqueName: \"kubernetes.io/projected/d6038b9a-0f9f-4457-abd7-e4c71ef50128-kube-api-access-pv4gf\") pod \"nova-cell1-conductor-0\" (UID: \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.513186 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 14:07:31 crc kubenswrapper[4914]: I0127 14:07:31.995248 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 14:07:32 crc kubenswrapper[4914]: W0127 14:07:32.002146 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6038b9a_0f9f_4457_abd7_e4c71ef50128.slice/crio-60c0451478eb1cd26edf775a7799c2a4a885c788345371d490c736b4cc9da227 WatchSource:0}: Error finding container 60c0451478eb1cd26edf775a7799c2a4a885c788345371d490c736b4cc9da227: Status 404 returned error can't find the container with id 60c0451478eb1cd26edf775a7799c2a4a885c788345371d490c736b4cc9da227 Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.142367 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d6038b9a-0f9f-4457-abd7-e4c71ef50128","Type":"ContainerStarted","Data":"60c0451478eb1cd26edf775a7799c2a4a885c788345371d490c736b4cc9da227"} Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.144463 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"1f18bf8b-0605-4134-9164-6b260b45d655","Type":"ContainerStarted","Data":"d2c90702edddcfa7e140951a3eb81347b66cf038b59705282e65abafb3f7f7de"} Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.144506 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f18bf8b-0605-4134-9164-6b260b45d655","Type":"ContainerStarted","Data":"1a1ae5ea7ef1d93ee16288df8fa20c5c933aead4a3285658c76915e7246df314"} Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.144517 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f18bf8b-0605-4134-9164-6b260b45d655","Type":"ContainerStarted","Data":"e902e3ecd6a842f33b6591c675545515e060f269fe93944f8a917e482702bedf"} Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.172068 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.172041831 podStartE2EDuration="2.172041831s" podCreationTimestamp="2026-01-27 14:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:32.161688198 +0000 UTC m=+1410.474038283" watchObservedRunningTime="2026-01-27 14:07:32.172041831 +0000 UTC m=+1410.484391926" Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.312310 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dee8248-9605-46f3-8062-979106e8f5f1" path="/var/lib/kubelet/pods/4dee8248-9605-46f3-8062-979106e8f5f1/volumes" Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.915666 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.940316 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g2rz\" (UniqueName: \"kubernetes.io/projected/b16d38f9-5614-4637-a98f-9c47190ccff4-kube-api-access-9g2rz\") pod \"b16d38f9-5614-4637-a98f-9c47190ccff4\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.940419 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-sg-core-conf-yaml\") pod \"b16d38f9-5614-4637-a98f-9c47190ccff4\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.940552 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-config-data\") pod \"b16d38f9-5614-4637-a98f-9c47190ccff4\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.940603 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-combined-ca-bundle\") pod \"b16d38f9-5614-4637-a98f-9c47190ccff4\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.940682 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-scripts\") pod \"b16d38f9-5614-4637-a98f-9c47190ccff4\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.940709 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b16d38f9-5614-4637-a98f-9c47190ccff4-run-httpd\") pod \"b16d38f9-5614-4637-a98f-9c47190ccff4\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.941682 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b16d38f9-5614-4637-a98f-9c47190ccff4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b16d38f9-5614-4637-a98f-9c47190ccff4" (UID: "b16d38f9-5614-4637-a98f-9c47190ccff4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.941737 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16d38f9-5614-4637-a98f-9c47190ccff4-log-httpd\") pod \"b16d38f9-5614-4637-a98f-9c47190ccff4\" (UID: \"b16d38f9-5614-4637-a98f-9c47190ccff4\") " Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.941741 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b16d38f9-5614-4637-a98f-9c47190ccff4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b16d38f9-5614-4637-a98f-9c47190ccff4" (UID: "b16d38f9-5614-4637-a98f-9c47190ccff4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.944458 4914 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16d38f9-5614-4637-a98f-9c47190ccff4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.944490 4914 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16d38f9-5614-4637-a98f-9c47190ccff4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.962530 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16d38f9-5614-4637-a98f-9c47190ccff4-kube-api-access-9g2rz" (OuterVolumeSpecName: "kube-api-access-9g2rz") pod "b16d38f9-5614-4637-a98f-9c47190ccff4" (UID: "b16d38f9-5614-4637-a98f-9c47190ccff4"). InnerVolumeSpecName "kube-api-access-9g2rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.971518 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-scripts" (OuterVolumeSpecName: "scripts") pod "b16d38f9-5614-4637-a98f-9c47190ccff4" (UID: "b16d38f9-5614-4637-a98f-9c47190ccff4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:32 crc kubenswrapper[4914]: I0127 14:07:32.976182 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b16d38f9-5614-4637-a98f-9c47190ccff4" (UID: "b16d38f9-5614-4637-a98f-9c47190ccff4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.046235 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.046282 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g2rz\" (UniqueName: \"kubernetes.io/projected/b16d38f9-5614-4637-a98f-9c47190ccff4-kube-api-access-9g2rz\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.046292 4914 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.065361 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b16d38f9-5614-4637-a98f-9c47190ccff4" (UID: "b16d38f9-5614-4637-a98f-9c47190ccff4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.093617 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-config-data" (OuterVolumeSpecName: "config-data") pod "b16d38f9-5614-4637-a98f-9c47190ccff4" (UID: "b16d38f9-5614-4637-a98f-9c47190ccff4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.147655 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.147691 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16d38f9-5614-4637-a98f-9c47190ccff4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.154274 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d6038b9a-0f9f-4457-abd7-e4c71ef50128","Type":"ContainerStarted","Data":"7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96"} Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.154391 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.157916 4914 generic.go:334] "Generic (PLEG): container finished" podID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerID="2bde8d36ec54c063fb58db8839a0deb3f66be93a6fe1d2555cf4f5eba33e84b3" exitCode=0 Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.157991 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.158009 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16d38f9-5614-4637-a98f-9c47190ccff4","Type":"ContainerDied","Data":"2bde8d36ec54c063fb58db8839a0deb3f66be93a6fe1d2555cf4f5eba33e84b3"} Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.158348 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16d38f9-5614-4637-a98f-9c47190ccff4","Type":"ContainerDied","Data":"7414e6be5338e286698b0fdf1254bb4bed0a5f7d349a0a1bf7197fd0862001ab"} Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.158374 4914 scope.go:117] "RemoveContainer" containerID="4a517bb0cb845a939cf1f2f21c49f3e50ba43c0141c80c2ad66e583e11cad3ae" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.197456 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.19740571 podStartE2EDuration="2.19740571s" podCreationTimestamp="2026-01-27 14:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:33.169327339 +0000 UTC m=+1411.481677424" watchObservedRunningTime="2026-01-27 14:07:33.19740571 +0000 UTC m=+1411.509755795" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.207389 4914 scope.go:117] "RemoveContainer" containerID="fa8842a92a6f55830ed93efc47ca2b83186c99f714cfbb125e4a239dc202dcef" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.224412 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.239303 4914 scope.go:117] "RemoveContainer" containerID="2bde8d36ec54c063fb58db8839a0deb3f66be93a6fe1d2555cf4f5eba33e84b3" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.239686 4914 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/ceilometer-0"] Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.251310 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:07:33 crc kubenswrapper[4914]: E0127 14:07:33.252020 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="proxy-httpd" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.252046 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="proxy-httpd" Jan 27 14:07:33 crc kubenswrapper[4914]: E0127 14:07:33.252091 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="sg-core" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.252098 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="sg-core" Jan 27 14:07:33 crc kubenswrapper[4914]: E0127 14:07:33.252113 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="ceilometer-central-agent" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.252120 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="ceilometer-central-agent" Jan 27 14:07:33 crc kubenswrapper[4914]: E0127 14:07:33.252129 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="ceilometer-notification-agent" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.252135 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="ceilometer-notification-agent" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.261058 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" 
containerName="ceilometer-central-agent" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.261101 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="proxy-httpd" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.261129 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="ceilometer-notification-agent" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.261142 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" containerName="sg-core" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.264304 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.264460 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.267153 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.267407 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.267425 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.273886 4914 scope.go:117] "RemoveContainer" containerID="8671d160ed2b7d759efd880eab6e024b910a3314b90142ccf06949d0ac922e63" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.295227 4914 scope.go:117] "RemoveContainer" containerID="4a517bb0cb845a939cf1f2f21c49f3e50ba43c0141c80c2ad66e583e11cad3ae" Jan 27 14:07:33 crc kubenswrapper[4914]: E0127 14:07:33.296249 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"4a517bb0cb845a939cf1f2f21c49f3e50ba43c0141c80c2ad66e583e11cad3ae\": container with ID starting with 4a517bb0cb845a939cf1f2f21c49f3e50ba43c0141c80c2ad66e583e11cad3ae not found: ID does not exist" containerID="4a517bb0cb845a939cf1f2f21c49f3e50ba43c0141c80c2ad66e583e11cad3ae" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.296304 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a517bb0cb845a939cf1f2f21c49f3e50ba43c0141c80c2ad66e583e11cad3ae"} err="failed to get container status \"4a517bb0cb845a939cf1f2f21c49f3e50ba43c0141c80c2ad66e583e11cad3ae\": rpc error: code = NotFound desc = could not find container \"4a517bb0cb845a939cf1f2f21c49f3e50ba43c0141c80c2ad66e583e11cad3ae\": container with ID starting with 4a517bb0cb845a939cf1f2f21c49f3e50ba43c0141c80c2ad66e583e11cad3ae not found: ID does not exist" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.296348 4914 scope.go:117] "RemoveContainer" containerID="fa8842a92a6f55830ed93efc47ca2b83186c99f714cfbb125e4a239dc202dcef" Jan 27 14:07:33 crc kubenswrapper[4914]: E0127 14:07:33.296670 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa8842a92a6f55830ed93efc47ca2b83186c99f714cfbb125e4a239dc202dcef\": container with ID starting with fa8842a92a6f55830ed93efc47ca2b83186c99f714cfbb125e4a239dc202dcef not found: ID does not exist" containerID="fa8842a92a6f55830ed93efc47ca2b83186c99f714cfbb125e4a239dc202dcef" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.296713 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa8842a92a6f55830ed93efc47ca2b83186c99f714cfbb125e4a239dc202dcef"} err="failed to get container status \"fa8842a92a6f55830ed93efc47ca2b83186c99f714cfbb125e4a239dc202dcef\": rpc error: code = NotFound desc = could not find container \"fa8842a92a6f55830ed93efc47ca2b83186c99f714cfbb125e4a239dc202dcef\": 
container with ID starting with fa8842a92a6f55830ed93efc47ca2b83186c99f714cfbb125e4a239dc202dcef not found: ID does not exist" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.296743 4914 scope.go:117] "RemoveContainer" containerID="2bde8d36ec54c063fb58db8839a0deb3f66be93a6fe1d2555cf4f5eba33e84b3" Jan 27 14:07:33 crc kubenswrapper[4914]: E0127 14:07:33.297151 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bde8d36ec54c063fb58db8839a0deb3f66be93a6fe1d2555cf4f5eba33e84b3\": container with ID starting with 2bde8d36ec54c063fb58db8839a0deb3f66be93a6fe1d2555cf4f5eba33e84b3 not found: ID does not exist" containerID="2bde8d36ec54c063fb58db8839a0deb3f66be93a6fe1d2555cf4f5eba33e84b3" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.297183 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bde8d36ec54c063fb58db8839a0deb3f66be93a6fe1d2555cf4f5eba33e84b3"} err="failed to get container status \"2bde8d36ec54c063fb58db8839a0deb3f66be93a6fe1d2555cf4f5eba33e84b3\": rpc error: code = NotFound desc = could not find container \"2bde8d36ec54c063fb58db8839a0deb3f66be93a6fe1d2555cf4f5eba33e84b3\": container with ID starting with 2bde8d36ec54c063fb58db8839a0deb3f66be93a6fe1d2555cf4f5eba33e84b3 not found: ID does not exist" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.297204 4914 scope.go:117] "RemoveContainer" containerID="8671d160ed2b7d759efd880eab6e024b910a3314b90142ccf06949d0ac922e63" Jan 27 14:07:33 crc kubenswrapper[4914]: E0127 14:07:33.297575 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8671d160ed2b7d759efd880eab6e024b910a3314b90142ccf06949d0ac922e63\": container with ID starting with 8671d160ed2b7d759efd880eab6e024b910a3314b90142ccf06949d0ac922e63 not found: ID does not exist" 
containerID="8671d160ed2b7d759efd880eab6e024b910a3314b90142ccf06949d0ac922e63" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.297630 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8671d160ed2b7d759efd880eab6e024b910a3314b90142ccf06949d0ac922e63"} err="failed to get container status \"8671d160ed2b7d759efd880eab6e024b910a3314b90142ccf06949d0ac922e63\": rpc error: code = NotFound desc = could not find container \"8671d160ed2b7d759efd880eab6e024b910a3314b90142ccf06949d0ac922e63\": container with ID starting with 8671d160ed2b7d759efd880eab6e024b910a3314b90142ccf06949d0ac922e63 not found: ID does not exist" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.358868 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.358923 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9ee1d3b-10d2-4b14-a66c-31028d58d293-run-httpd\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.359021 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.359076 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-config-data\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.359095 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9ee1d3b-10d2-4b14-a66c-31028d58d293-log-httpd\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.359116 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwgr2\" (UniqueName: \"kubernetes.io/projected/f9ee1d3b-10d2-4b14-a66c-31028d58d293-kube-api-access-mwgr2\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.359432 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-scripts\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.359535 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.461116 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-scripts\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " 
pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.461174 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.461219 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.461237 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9ee1d3b-10d2-4b14-a66c-31028d58d293-run-httpd\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.461263 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.461295 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-config-data\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.461311 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f9ee1d3b-10d2-4b14-a66c-31028d58d293-log-httpd\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.461328 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwgr2\" (UniqueName: \"kubernetes.io/projected/f9ee1d3b-10d2-4b14-a66c-31028d58d293-kube-api-access-mwgr2\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.463699 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9ee1d3b-10d2-4b14-a66c-31028d58d293-log-httpd\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.465351 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9ee1d3b-10d2-4b14-a66c-31028d58d293-run-httpd\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.467251 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-scripts\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.467535 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.469423 4914 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.472998 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-config-data\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.475267 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.479124 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwgr2\" (UniqueName: \"kubernetes.io/projected/f9ee1d3b-10d2-4b14-a66c-31028d58d293-kube-api-access-mwgr2\") pod \"ceilometer-0\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") " pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: I0127 14:07:33.583940 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:07:33 crc kubenswrapper[4914]: E0127 14:07:33.660939 4914 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="706aa5bb3437d8c68ea8dd9b787a21ad8d21686bd2e751e75a7ee53ec131a443" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 14:07:33 crc kubenswrapper[4914]: E0127 14:07:33.668896 4914 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="706aa5bb3437d8c68ea8dd9b787a21ad8d21686bd2e751e75a7ee53ec131a443" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 14:07:33 crc kubenswrapper[4914]: E0127 14:07:33.680768 4914 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="706aa5bb3437d8c68ea8dd9b787a21ad8d21686bd2e751e75a7ee53ec131a443" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 14:07:33 crc kubenswrapper[4914]: E0127 14:07:33.680858 4914 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8b24ff69-cfa5-4f60-a771-81dd0abda624" containerName="nova-scheduler-scheduler" Jan 27 14:07:34 crc kubenswrapper[4914]: W0127 14:07:34.031652 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9ee1d3b_10d2_4b14_a66c_31028d58d293.slice/crio-c22b7e1776cae308ed9d419725f12fef2f4cb69f38a93501fe9260c8f888d9c3 WatchSource:0}: Error finding container 
c22b7e1776cae308ed9d419725f12fef2f4cb69f38a93501fe9260c8f888d9c3: Status 404 returned error can't find the container with id c22b7e1776cae308ed9d419725f12fef2f4cb69f38a93501fe9260c8f888d9c3 Jan 27 14:07:34 crc kubenswrapper[4914]: I0127 14:07:34.033301 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:07:34 crc kubenswrapper[4914]: I0127 14:07:34.167359 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9ee1d3b-10d2-4b14-a66c-31028d58d293","Type":"ContainerStarted","Data":"c22b7e1776cae308ed9d419725f12fef2f4cb69f38a93501fe9260c8f888d9c3"} Jan 27 14:07:34 crc kubenswrapper[4914]: I0127 14:07:34.306697 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16d38f9-5614-4637-a98f-9c47190ccff4" path="/var/lib/kubelet/pods/b16d38f9-5614-4637-a98f-9c47190ccff4/volumes" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.154144 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.185740 4914 generic.go:334] "Generic (PLEG): container finished" podID="8b24ff69-cfa5-4f60-a771-81dd0abda624" containerID="706aa5bb3437d8c68ea8dd9b787a21ad8d21686bd2e751e75a7ee53ec131a443" exitCode=0 Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.185863 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8b24ff69-cfa5-4f60-a771-81dd0abda624","Type":"ContainerDied","Data":"706aa5bb3437d8c68ea8dd9b787a21ad8d21686bd2e751e75a7ee53ec131a443"} Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.189301 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9ee1d3b-10d2-4b14-a66c-31028d58d293","Type":"ContainerStarted","Data":"395f2aa7bc2819e4d3d89940c2194681325c6f902f6ee2ac1828cee2fcb7d452"} Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.191983 4914 generic.go:334] "Generic (PLEG): container finished" podID="36e5891e-45f7-4898-8172-773b64a58b3a" containerID="83ee45e6c3e1df1a7a7d8c146c2b4685b950eb93db9359556e1044d3c3cf2b35" exitCode=0 Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.192020 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36e5891e-45f7-4898-8172-773b64a58b3a","Type":"ContainerDied","Data":"83ee45e6c3e1df1a7a7d8c146c2b4685b950eb93db9359556e1044d3c3cf2b35"} Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.192043 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36e5891e-45f7-4898-8172-773b64a58b3a","Type":"ContainerDied","Data":"f7ee44a454d44c7193b1477c3235d4e1965e8dc6b16f6ac4215d0dda5ee192fc"} Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.192063 4914 scope.go:117] "RemoveContainer" containerID="83ee45e6c3e1df1a7a7d8c146c2b4685b950eb93db9359556e1044d3c3cf2b35" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.192173 4914 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.196010 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e5891e-45f7-4898-8172-773b64a58b3a-config-data\") pod \"36e5891e-45f7-4898-8172-773b64a58b3a\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.196218 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e5891e-45f7-4898-8172-773b64a58b3a-combined-ca-bundle\") pod \"36e5891e-45f7-4898-8172-773b64a58b3a\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.196269 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36e5891e-45f7-4898-8172-773b64a58b3a-logs\") pod \"36e5891e-45f7-4898-8172-773b64a58b3a\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.196317 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh4vq\" (UniqueName: \"kubernetes.io/projected/36e5891e-45f7-4898-8172-773b64a58b3a-kube-api-access-mh4vq\") pod \"36e5891e-45f7-4898-8172-773b64a58b3a\" (UID: \"36e5891e-45f7-4898-8172-773b64a58b3a\") " Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.196992 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36e5891e-45f7-4898-8172-773b64a58b3a-logs" (OuterVolumeSpecName: "logs") pod "36e5891e-45f7-4898-8172-773b64a58b3a" (UID: "36e5891e-45f7-4898-8172-773b64a58b3a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.200858 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36e5891e-45f7-4898-8172-773b64a58b3a-kube-api-access-mh4vq" (OuterVolumeSpecName: "kube-api-access-mh4vq") pod "36e5891e-45f7-4898-8172-773b64a58b3a" (UID: "36e5891e-45f7-4898-8172-773b64a58b3a"). InnerVolumeSpecName "kube-api-access-mh4vq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.214667 4914 scope.go:117] "RemoveContainer" containerID="6ad23d27f4a81ff0f796a2c63d93992d9d17cac7fefd9455b2b90c371e9c6256" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.224652 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36e5891e-45f7-4898-8172-773b64a58b3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36e5891e-45f7-4898-8172-773b64a58b3a" (UID: "36e5891e-45f7-4898-8172-773b64a58b3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.232759 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36e5891e-45f7-4898-8172-773b64a58b3a-config-data" (OuterVolumeSpecName: "config-data") pod "36e5891e-45f7-4898-8172-773b64a58b3a" (UID: "36e5891e-45f7-4898-8172-773b64a58b3a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.233479 4914 scope.go:117] "RemoveContainer" containerID="83ee45e6c3e1df1a7a7d8c146c2b4685b950eb93db9359556e1044d3c3cf2b35" Jan 27 14:07:35 crc kubenswrapper[4914]: E0127 14:07:35.233936 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83ee45e6c3e1df1a7a7d8c146c2b4685b950eb93db9359556e1044d3c3cf2b35\": container with ID starting with 83ee45e6c3e1df1a7a7d8c146c2b4685b950eb93db9359556e1044d3c3cf2b35 not found: ID does not exist" containerID="83ee45e6c3e1df1a7a7d8c146c2b4685b950eb93db9359556e1044d3c3cf2b35" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.233970 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83ee45e6c3e1df1a7a7d8c146c2b4685b950eb93db9359556e1044d3c3cf2b35"} err="failed to get container status \"83ee45e6c3e1df1a7a7d8c146c2b4685b950eb93db9359556e1044d3c3cf2b35\": rpc error: code = NotFound desc = could not find container \"83ee45e6c3e1df1a7a7d8c146c2b4685b950eb93db9359556e1044d3c3cf2b35\": container with ID starting with 83ee45e6c3e1df1a7a7d8c146c2b4685b950eb93db9359556e1044d3c3cf2b35 not found: ID does not exist" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.234061 4914 scope.go:117] "RemoveContainer" containerID="6ad23d27f4a81ff0f796a2c63d93992d9d17cac7fefd9455b2b90c371e9c6256" Jan 27 14:07:35 crc kubenswrapper[4914]: E0127 14:07:35.234378 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad23d27f4a81ff0f796a2c63d93992d9d17cac7fefd9455b2b90c371e9c6256\": container with ID starting with 6ad23d27f4a81ff0f796a2c63d93992d9d17cac7fefd9455b2b90c371e9c6256 not found: ID does not exist" containerID="6ad23d27f4a81ff0f796a2c63d93992d9d17cac7fefd9455b2b90c371e9c6256" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.234422 
4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad23d27f4a81ff0f796a2c63d93992d9d17cac7fefd9455b2b90c371e9c6256"} err="failed to get container status \"6ad23d27f4a81ff0f796a2c63d93992d9d17cac7fefd9455b2b90c371e9c6256\": rpc error: code = NotFound desc = could not find container \"6ad23d27f4a81ff0f796a2c63d93992d9d17cac7fefd9455b2b90c371e9c6256\": container with ID starting with 6ad23d27f4a81ff0f796a2c63d93992d9d17cac7fefd9455b2b90c371e9c6256 not found: ID does not exist" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.298803 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e5891e-45f7-4898-8172-773b64a58b3a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.298867 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e5891e-45f7-4898-8172-773b64a58b3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.298877 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36e5891e-45f7-4898-8172-773b64a58b3a-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.298886 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh4vq\" (UniqueName: \"kubernetes.io/projected/36e5891e-45f7-4898-8172-773b64a58b3a-kube-api-access-mh4vq\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.459868 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.503280 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.580153 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.603429 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsmlx\" (UniqueName: \"kubernetes.io/projected/8b24ff69-cfa5-4f60-a771-81dd0abda624-kube-api-access-rsmlx\") pod \"8b24ff69-cfa5-4f60-a771-81dd0abda624\" (UID: \"8b24ff69-cfa5-4f60-a771-81dd0abda624\") " Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.603709 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b24ff69-cfa5-4f60-a771-81dd0abda624-config-data\") pod \"8b24ff69-cfa5-4f60-a771-81dd0abda624\" (UID: \"8b24ff69-cfa5-4f60-a771-81dd0abda624\") " Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.603856 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b24ff69-cfa5-4f60-a771-81dd0abda624-combined-ca-bundle\") pod \"8b24ff69-cfa5-4f60-a771-81dd0abda624\" (UID: \"8b24ff69-cfa5-4f60-a771-81dd0abda624\") " Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.615663 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b24ff69-cfa5-4f60-a771-81dd0abda624-kube-api-access-rsmlx" (OuterVolumeSpecName: "kube-api-access-rsmlx") pod "8b24ff69-cfa5-4f60-a771-81dd0abda624" (UID: "8b24ff69-cfa5-4f60-a771-81dd0abda624"). InnerVolumeSpecName "kube-api-access-rsmlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.627935 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.645940 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 14:07:35 crc kubenswrapper[4914]: E0127 14:07:35.646510 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e5891e-45f7-4898-8172-773b64a58b3a" containerName="nova-api-api" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.646535 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e5891e-45f7-4898-8172-773b64a58b3a" containerName="nova-api-api" Jan 27 14:07:35 crc kubenswrapper[4914]: E0127 14:07:35.646554 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b24ff69-cfa5-4f60-a771-81dd0abda624" containerName="nova-scheduler-scheduler" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.646562 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b24ff69-cfa5-4f60-a771-81dd0abda624" containerName="nova-scheduler-scheduler" Jan 27 14:07:35 crc kubenswrapper[4914]: E0127 14:07:35.646594 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e5891e-45f7-4898-8172-773b64a58b3a" containerName="nova-api-log" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.646604 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e5891e-45f7-4898-8172-773b64a58b3a" containerName="nova-api-log" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.646922 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b24ff69-cfa5-4f60-a771-81dd0abda624" containerName="nova-scheduler-scheduler" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.646943 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e5891e-45f7-4898-8172-773b64a58b3a" containerName="nova-api-log" Jan 27 14:07:35 crc kubenswrapper[4914]: 
I0127 14:07:35.646961 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e5891e-45f7-4898-8172-773b64a58b3a" containerName="nova-api-api" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.648252 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.650762 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.661002 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.666808 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b24ff69-cfa5-4f60-a771-81dd0abda624-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b24ff69-cfa5-4f60-a771-81dd0abda624" (UID: "8b24ff69-cfa5-4f60-a771-81dd0abda624"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.668646 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b24ff69-cfa5-4f60-a771-81dd0abda624-config-data" (OuterVolumeSpecName: "config-data") pod "8b24ff69-cfa5-4f60-a771-81dd0abda624" (UID: "8b24ff69-cfa5-4f60-a771-81dd0abda624"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.705111 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e90f2995-a095-41fb-9f5f-67c297931376-logs\") pod \"nova-api-0\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") " pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.705185 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e90f2995-a095-41fb-9f5f-67c297931376-config-data\") pod \"nova-api-0\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") " pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.705217 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e90f2995-a095-41fb-9f5f-67c297931376-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") " pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.705234 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqzdp\" (UniqueName: \"kubernetes.io/projected/e90f2995-a095-41fb-9f5f-67c297931376-kube-api-access-vqzdp\") pod \"nova-api-0\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") " pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.705702 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b24ff69-cfa5-4f60-a771-81dd0abda624-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.705739 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8b24ff69-cfa5-4f60-a771-81dd0abda624-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.705754 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsmlx\" (UniqueName: \"kubernetes.io/projected/8b24ff69-cfa5-4f60-a771-81dd0abda624-kube-api-access-rsmlx\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.807546 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e90f2995-a095-41fb-9f5f-67c297931376-logs\") pod \"nova-api-0\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") " pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.807916 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e90f2995-a095-41fb-9f5f-67c297931376-config-data\") pod \"nova-api-0\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") " pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.808122 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e90f2995-a095-41fb-9f5f-67c297931376-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") " pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.808216 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqzdp\" (UniqueName: \"kubernetes.io/projected/e90f2995-a095-41fb-9f5f-67c297931376-kube-api-access-vqzdp\") pod \"nova-api-0\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") " pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.809022 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e90f2995-a095-41fb-9f5f-67c297931376-logs\") pod \"nova-api-0\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") " pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.812536 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e90f2995-a095-41fb-9f5f-67c297931376-config-data\") pod \"nova-api-0\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") " pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.813797 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e90f2995-a095-41fb-9f5f-67c297931376-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") " pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.828697 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqzdp\" (UniqueName: \"kubernetes.io/projected/e90f2995-a095-41fb-9f5f-67c297931376-kube-api-access-vqzdp\") pod \"nova-api-0\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") " pod="openstack/nova-api-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.867772 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.868153 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 14:07:35 crc kubenswrapper[4914]: I0127 14:07:35.988575 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.222622 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9ee1d3b-10d2-4b14-a66c-31028d58d293","Type":"ContainerStarted","Data":"814c861ffe221ff32dd134bad6891d8ad8203761e8536a2a6682c4e55ffc892b"} Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.225449 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.227374 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8b24ff69-cfa5-4f60-a771-81dd0abda624","Type":"ContainerDied","Data":"c183a32bfaa9ec0e6be78cbcb1966e58fe77800e2be545a5d30ae6bca079eb88"} Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.227428 4914 scope.go:117] "RemoveContainer" containerID="706aa5bb3437d8c68ea8dd9b787a21ad8d21686bd2e751e75a7ee53ec131a443" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.317082 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36e5891e-45f7-4898-8172-773b64a58b3a" path="/var/lib/kubelet/pods/36e5891e-45f7-4898-8172-773b64a58b3a/volumes" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.319803 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.338938 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.351267 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.352598 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.355720 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.360449 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.460592 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:07:36 crc kubenswrapper[4914]: W0127 14:07:36.467529 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode90f2995_a095_41fb_9f5f_67c297931376.slice/crio-25f9c955fa35da9040862870edafcfedb614e5e5a7cd88ca36ea39c298c44422 WatchSource:0}: Error finding container 25f9c955fa35da9040862870edafcfedb614e5e5a7cd88ca36ea39c298c44422: Status 404 returned error can't find the container with id 25f9c955fa35da9040862870edafcfedb614e5e5a7cd88ca36ea39c298c44422 Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.525392 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vbcj\" (UniqueName: \"kubernetes.io/projected/04aef3ba-9fc4-4b4e-821c-66c59df81d31-kube-api-access-6vbcj\") pod \"nova-scheduler-0\" (UID: \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.525619 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aef3ba-9fc4-4b4e-821c-66c59df81d31-config-data\") pod \"nova-scheduler-0\" (UID: \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.525795 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aef3ba-9fc4-4b4e-821c-66c59df81d31-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.629138 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vbcj\" (UniqueName: \"kubernetes.io/projected/04aef3ba-9fc4-4b4e-821c-66c59df81d31-kube-api-access-6vbcj\") pod \"nova-scheduler-0\" (UID: \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.629280 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aef3ba-9fc4-4b4e-821c-66c59df81d31-config-data\") pod \"nova-scheduler-0\" (UID: \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.629424 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aef3ba-9fc4-4b4e-821c-66c59df81d31-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.634466 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aef3ba-9fc4-4b4e-821c-66c59df81d31-config-data\") pod \"nova-scheduler-0\" (UID: \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.634483 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aef3ba-9fc4-4b4e-821c-66c59df81d31-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"04aef3ba-9fc4-4b4e-821c-66c59df81d31\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.650910 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vbcj\" (UniqueName: \"kubernetes.io/projected/04aef3ba-9fc4-4b4e-821c-66c59df81d31-kube-api-access-6vbcj\") pod \"nova-scheduler-0\" (UID: \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\") " pod="openstack/nova-scheduler-0" Jan 27 14:07:36 crc kubenswrapper[4914]: I0127 14:07:36.713381 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:07:37 crc kubenswrapper[4914]: I0127 14:07:37.245324 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e90f2995-a095-41fb-9f5f-67c297931376","Type":"ContainerStarted","Data":"6cdfab577c393ae857de2741b61ef3d78b9ba7a26b55a3fa4a7c14baf5cb9baf"} Jan 27 14:07:37 crc kubenswrapper[4914]: I0127 14:07:37.245555 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e90f2995-a095-41fb-9f5f-67c297931376","Type":"ContainerStarted","Data":"a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604"} Jan 27 14:07:37 crc kubenswrapper[4914]: I0127 14:07:37.245571 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e90f2995-a095-41fb-9f5f-67c297931376","Type":"ContainerStarted","Data":"25f9c955fa35da9040862870edafcfedb614e5e5a7cd88ca36ea39c298c44422"} Jan 27 14:07:37 crc kubenswrapper[4914]: I0127 14:07:37.264143 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.264125392 podStartE2EDuration="2.264125392s" podCreationTimestamp="2026-01-27 14:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:37.263458374 +0000 UTC m=+1415.575808479" 
watchObservedRunningTime="2026-01-27 14:07:37.264125392 +0000 UTC m=+1415.576475477" Jan 27 14:07:37 crc kubenswrapper[4914]: I0127 14:07:37.275329 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9ee1d3b-10d2-4b14-a66c-31028d58d293","Type":"ContainerStarted","Data":"6cab203ff0db5e699a0278fd9921cad1759d056bd89b6db89400e252e04c5a8e"} Jan 27 14:07:37 crc kubenswrapper[4914]: I0127 14:07:37.832377 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:07:38 crc kubenswrapper[4914]: I0127 14:07:38.285662 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"04aef3ba-9fc4-4b4e-821c-66c59df81d31","Type":"ContainerStarted","Data":"9b820cf13d68952ff032818155814e0fd674bae225d6a06a6384f912089e623d"} Jan 27 14:07:38 crc kubenswrapper[4914]: I0127 14:07:38.286045 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"04aef3ba-9fc4-4b4e-821c-66c59df81d31","Type":"ContainerStarted","Data":"98efb747ed5d262ef3f21f527850d00212a4ff51428aa305a4ac47ef999d51a5"} Jan 27 14:07:38 crc kubenswrapper[4914]: I0127 14:07:38.308787 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b24ff69-cfa5-4f60-a771-81dd0abda624" path="/var/lib/kubelet/pods/8b24ff69-cfa5-4f60-a771-81dd0abda624/volumes" Jan 27 14:07:39 crc kubenswrapper[4914]: I0127 14:07:39.296792 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9ee1d3b-10d2-4b14-a66c-31028d58d293","Type":"ContainerStarted","Data":"d2a4ff8df0830fecbac1a552f9fd9d4708c45cd41f8c9e396342f00667ab4b0d"} Jan 27 14:07:39 crc kubenswrapper[4914]: I0127 14:07:39.322179 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.32215897 podStartE2EDuration="3.32215897s" podCreationTimestamp="2026-01-27 14:07:36 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:38.301410717 +0000 UTC m=+1416.613760812" watchObservedRunningTime="2026-01-27 14:07:39.32215897 +0000 UTC m=+1417.634509055" Jan 27 14:07:39 crc kubenswrapper[4914]: I0127 14:07:39.322613 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.190829485 podStartE2EDuration="6.322608782s" podCreationTimestamp="2026-01-27 14:07:33 +0000 UTC" firstStartedPulling="2026-01-27 14:07:34.033577497 +0000 UTC m=+1412.345927572" lastFinishedPulling="2026-01-27 14:07:38.165356784 +0000 UTC m=+1416.477706869" observedRunningTime="2026-01-27 14:07:39.31779127 +0000 UTC m=+1417.630141365" watchObservedRunningTime="2026-01-27 14:07:39.322608782 +0000 UTC m=+1417.634958867" Jan 27 14:07:40 crc kubenswrapper[4914]: I0127 14:07:40.320891 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:07:40 crc kubenswrapper[4914]: I0127 14:07:40.867892 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 14:07:40 crc kubenswrapper[4914]: I0127 14:07:40.868401 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 14:07:41 crc kubenswrapper[4914]: I0127 14:07:41.546478 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 14:07:41 crc kubenswrapper[4914]: I0127 14:07:41.714767 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 14:07:41 crc kubenswrapper[4914]: I0127 14:07:41.879028 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1f18bf8b-0605-4134-9164-6b260b45d655" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:07:41 crc kubenswrapper[4914]: I0127 14:07:41.879172 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1f18bf8b-0605-4134-9164-6b260b45d655" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:07:45 crc kubenswrapper[4914]: I0127 14:07:45.990697 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:07:45 crc kubenswrapper[4914]: I0127 14:07:45.991247 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:07:46 crc kubenswrapper[4914]: I0127 14:07:46.714237 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 14:07:46 crc kubenswrapper[4914]: I0127 14:07:46.743608 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 14:07:47 crc kubenswrapper[4914]: I0127 14:07:47.073004 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e90f2995-a095-41fb-9f5f-67c297931376" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:07:47 crc kubenswrapper[4914]: I0127 14:07:47.073101 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e90f2995-a095-41fb-9f5f-67c297931376" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:07:47 crc kubenswrapper[4914]: I0127 14:07:47.482844 4914 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 14:07:50 crc kubenswrapper[4914]: I0127 14:07:50.874515 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 14:07:50 crc kubenswrapper[4914]: I0127 14:07:50.874932 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 14:07:50 crc kubenswrapper[4914]: I0127 14:07:50.881768 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 14:07:50 crc kubenswrapper[4914]: I0127 14:07:50.883820 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.388903 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.511613 4914 generic.go:334] "Generic (PLEG): container finished" podID="f66f9483-7be9-4f55-8e6e-144bbc391d55" containerID="d764b8cbe0290c40d76acdc15e120485bd5438556fe3c0346685b7ec6849ff51" exitCode=137 Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.511659 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f66f9483-7be9-4f55-8e6e-144bbc391d55","Type":"ContainerDied","Data":"d764b8cbe0290c40d76acdc15e120485bd5438556fe3c0346685b7ec6849ff51"} Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.511687 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f66f9483-7be9-4f55-8e6e-144bbc391d55","Type":"ContainerDied","Data":"4613dd9b0c7b3a0b694c3c7ce930e381e5f434f8a6182e56ae39516cf6b245b8"} Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.511685 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.511704 4914 scope.go:117] "RemoveContainer" containerID="d764b8cbe0290c40d76acdc15e120485bd5438556fe3c0346685b7ec6849ff51" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.516531 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brnjk\" (UniqueName: \"kubernetes.io/projected/f66f9483-7be9-4f55-8e6e-144bbc391d55-kube-api-access-brnjk\") pod \"f66f9483-7be9-4f55-8e6e-144bbc391d55\" (UID: \"f66f9483-7be9-4f55-8e6e-144bbc391d55\") " Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.516752 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66f9483-7be9-4f55-8e6e-144bbc391d55-config-data\") pod \"f66f9483-7be9-4f55-8e6e-144bbc391d55\" (UID: \"f66f9483-7be9-4f55-8e6e-144bbc391d55\") " Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.520414 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66f9483-7be9-4f55-8e6e-144bbc391d55-combined-ca-bundle\") pod \"f66f9483-7be9-4f55-8e6e-144bbc391d55\" (UID: \"f66f9483-7be9-4f55-8e6e-144bbc391d55\") " Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.535105 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66f9483-7be9-4f55-8e6e-144bbc391d55-kube-api-access-brnjk" (OuterVolumeSpecName: "kube-api-access-brnjk") pod "f66f9483-7be9-4f55-8e6e-144bbc391d55" (UID: "f66f9483-7be9-4f55-8e6e-144bbc391d55"). InnerVolumeSpecName "kube-api-access-brnjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.541585 4914 scope.go:117] "RemoveContainer" containerID="d764b8cbe0290c40d76acdc15e120485bd5438556fe3c0346685b7ec6849ff51" Jan 27 14:07:54 crc kubenswrapper[4914]: E0127 14:07:54.542128 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d764b8cbe0290c40d76acdc15e120485bd5438556fe3c0346685b7ec6849ff51\": container with ID starting with d764b8cbe0290c40d76acdc15e120485bd5438556fe3c0346685b7ec6849ff51 not found: ID does not exist" containerID="d764b8cbe0290c40d76acdc15e120485bd5438556fe3c0346685b7ec6849ff51" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.542185 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d764b8cbe0290c40d76acdc15e120485bd5438556fe3c0346685b7ec6849ff51"} err="failed to get container status \"d764b8cbe0290c40d76acdc15e120485bd5438556fe3c0346685b7ec6849ff51\": rpc error: code = NotFound desc = could not find container \"d764b8cbe0290c40d76acdc15e120485bd5438556fe3c0346685b7ec6849ff51\": container with ID starting with d764b8cbe0290c40d76acdc15e120485bd5438556fe3c0346685b7ec6849ff51 not found: ID does not exist" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.556005 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66f9483-7be9-4f55-8e6e-144bbc391d55-config-data" (OuterVolumeSpecName: "config-data") pod "f66f9483-7be9-4f55-8e6e-144bbc391d55" (UID: "f66f9483-7be9-4f55-8e6e-144bbc391d55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.559142 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66f9483-7be9-4f55-8e6e-144bbc391d55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f66f9483-7be9-4f55-8e6e-144bbc391d55" (UID: "f66f9483-7be9-4f55-8e6e-144bbc391d55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.623113 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brnjk\" (UniqueName: \"kubernetes.io/projected/f66f9483-7be9-4f55-8e6e-144bbc391d55-kube-api-access-brnjk\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.623152 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66f9483-7be9-4f55-8e6e-144bbc391d55-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.623165 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66f9483-7be9-4f55-8e6e-144bbc391d55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.851653 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.861922 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.890697 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:07:54 crc kubenswrapper[4914]: E0127 14:07:54.891207 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66f9483-7be9-4f55-8e6e-144bbc391d55" 
containerName="nova-cell1-novncproxy-novncproxy" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.891230 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66f9483-7be9-4f55-8e6e-144bbc391d55" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.891430 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f66f9483-7be9-4f55-8e6e-144bbc391d55" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.892189 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.894423 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.894890 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.895034 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.903407 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.929150 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.929315 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.929661 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.929723 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ljxc\" (UniqueName: \"kubernetes.io/projected/b01a3b5a-39b3-450a-acc4-e76987f7f506-kube-api-access-9ljxc\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:54 crc kubenswrapper[4914]: I0127 14:07:54.929810 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.031605 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.031693 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.031735 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.032708 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.032753 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ljxc\" (UniqueName: \"kubernetes.io/projected/b01a3b5a-39b3-450a-acc4-e76987f7f506-kube-api-access-9ljxc\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.036683 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.036767 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.036764 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.038646 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.056435 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ljxc\" (UniqueName: \"kubernetes.io/projected/b01a3b5a-39b3-450a-acc4-e76987f7f506-kube-api-access-9ljxc\") pod \"nova-cell1-novncproxy-0\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.212103 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.460867 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.525678 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b01a3b5a-39b3-450a-acc4-e76987f7f506","Type":"ContainerStarted","Data":"e6dd11b7cd86357c85dfaf97ec593dbee71ac433b224e1ffc5e289ed57cef6be"} Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.994570 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.995329 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.998742 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:07:55 crc kubenswrapper[4914]: I0127 14:07:55.999272 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.304364 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66f9483-7be9-4f55-8e6e-144bbc391d55" path="/var/lib/kubelet/pods/f66f9483-7be9-4f55-8e6e-144bbc391d55/volumes" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.536314 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b01a3b5a-39b3-450a-acc4-e76987f7f506","Type":"ContainerStarted","Data":"1f930b3b6b87a9825ee5e47f0bfd988c0aa4340fdb03506c942c0ae815089987"} Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.536427 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.540849 4914 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.561712 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.561685175 podStartE2EDuration="2.561685175s" podCreationTimestamp="2026-01-27 14:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:56.560296506 +0000 UTC m=+1434.872646631" watchObservedRunningTime="2026-01-27 14:07:56.561685175 +0000 UTC m=+1434.874035300" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.734474 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-v7nlp"] Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.738640 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.752366 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-v7nlp"] Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.867844 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-config\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.868173 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 
14:07:56.868248 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.868270 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.868337 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-dns-svc\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.868423 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpb8c\" (UniqueName: \"kubernetes.io/projected/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-kube-api-access-fpb8c\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.969628 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpb8c\" (UniqueName: \"kubernetes.io/projected/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-kube-api-access-fpb8c\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 
14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.969708 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-config\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.969766 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.969870 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.969900 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.969931 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-dns-svc\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.970915 4914 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-config\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.970955 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.971011 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.970994 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-dns-svc\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.971050 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:56 crc kubenswrapper[4914]: I0127 14:07:56.996739 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpb8c\" (UniqueName: 
\"kubernetes.io/projected/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-kube-api-access-fpb8c\") pod \"dnsmasq-dns-5ddd577785-v7nlp\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") " pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:57 crc kubenswrapper[4914]: I0127 14:07:57.079357 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:07:57 crc kubenswrapper[4914]: I0127 14:07:57.683045 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-v7nlp"] Jan 27 14:07:58 crc kubenswrapper[4914]: I0127 14:07:58.555878 4914 generic.go:334] "Generic (PLEG): container finished" podID="ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" containerID="f6a4b339a800946a03774f043c586362ac2b5eef075716cad7fc383bc92c867a" exitCode=0 Jan 27 14:07:58 crc kubenswrapper[4914]: I0127 14:07:58.555977 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" event={"ID":"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef","Type":"ContainerDied","Data":"f6a4b339a800946a03774f043c586362ac2b5eef075716cad7fc383bc92c867a"} Jan 27 14:07:58 crc kubenswrapper[4914]: I0127 14:07:58.556446 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" event={"ID":"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef","Type":"ContainerStarted","Data":"3c170324e9d459871f9be7ecbf915e818c66ed55fc91d97319f16bc4eaf3f788"} Jan 27 14:07:58 crc kubenswrapper[4914]: I0127 14:07:58.767413 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:07:58 crc kubenswrapper[4914]: I0127 14:07:58.767988 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="ceilometer-central-agent" containerID="cri-o://395f2aa7bc2819e4d3d89940c2194681325c6f902f6ee2ac1828cee2fcb7d452" gracePeriod=30 Jan 27 14:07:58 crc kubenswrapper[4914]: 
I0127 14:07:58.768973 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="proxy-httpd" containerID="cri-o://d2a4ff8df0830fecbac1a552f9fd9d4708c45cd41f8c9e396342f00667ab4b0d" gracePeriod=30
Jan 27 14:07:58 crc kubenswrapper[4914]: I0127 14:07:58.769084 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="sg-core" containerID="cri-o://6cab203ff0db5e699a0278fd9921cad1759d056bd89b6db89400e252e04c5a8e" gracePeriod=30
Jan 27 14:07:58 crc kubenswrapper[4914]: I0127 14:07:58.769136 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="ceilometer-notification-agent" containerID="cri-o://814c861ffe221ff32dd134bad6891d8ad8203761e8536a2a6682c4e55ffc892b" gracePeriod=30
Jan 27 14:07:58 crc kubenswrapper[4914]: I0127 14:07:58.795157 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 14:07:59 crc kubenswrapper[4914]: I0127 14:07:59.179453 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:07:59 crc kubenswrapper[4914]: I0127 14:07:59.574552 4914 generic.go:334] "Generic (PLEG): container finished" podID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerID="d2a4ff8df0830fecbac1a552f9fd9d4708c45cd41f8c9e396342f00667ab4b0d" exitCode=0
Jan 27 14:07:59 crc kubenswrapper[4914]: I0127 14:07:59.574965 4914 generic.go:334] "Generic (PLEG): container finished" podID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerID="6cab203ff0db5e699a0278fd9921cad1759d056bd89b6db89400e252e04c5a8e" exitCode=2
Jan 27 14:07:59 crc kubenswrapper[4914]: I0127 14:07:59.575017 4914 generic.go:334] "Generic (PLEG): container finished" podID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerID="395f2aa7bc2819e4d3d89940c2194681325c6f902f6ee2ac1828cee2fcb7d452" exitCode=0
Jan 27 14:07:59 crc kubenswrapper[4914]: I0127 14:07:59.574911 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9ee1d3b-10d2-4b14-a66c-31028d58d293","Type":"ContainerDied","Data":"d2a4ff8df0830fecbac1a552f9fd9d4708c45cd41f8c9e396342f00667ab4b0d"}
Jan 27 14:07:59 crc kubenswrapper[4914]: I0127 14:07:59.575100 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9ee1d3b-10d2-4b14-a66c-31028d58d293","Type":"ContainerDied","Data":"6cab203ff0db5e699a0278fd9921cad1759d056bd89b6db89400e252e04c5a8e"}
Jan 27 14:07:59 crc kubenswrapper[4914]: I0127 14:07:59.575116 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9ee1d3b-10d2-4b14-a66c-31028d58d293","Type":"ContainerDied","Data":"395f2aa7bc2819e4d3d89940c2194681325c6f902f6ee2ac1828cee2fcb7d452"}
Jan 27 14:07:59 crc kubenswrapper[4914]: I0127 14:07:59.577491 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e90f2995-a095-41fb-9f5f-67c297931376" containerName="nova-api-log" containerID="cri-o://a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604" gracePeriod=30
Jan 27 14:07:59 crc kubenswrapper[4914]: I0127 14:07:59.577733 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e90f2995-a095-41fb-9f5f-67c297931376" containerName="nova-api-api" containerID="cri-o://6cdfab577c393ae857de2741b61ef3d78b9ba7a26b55a3fa4a7c14baf5cb9baf" gracePeriod=30
Jan 27 14:07:59 crc kubenswrapper[4914]: I0127 14:07:59.577758 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" event={"ID":"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef","Type":"ContainerStarted","Data":"033ba27cb7821e0546451327e09e9cb96e72a49a52be0876d9ff5d3c1933e93c"}
Jan 27 14:07:59 crc kubenswrapper[4914]: I0127 14:07:59.579004 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ddd577785-v7nlp"
Jan 27 14:07:59 crc kubenswrapper[4914]: I0127 14:07:59.600697 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" podStartSLOduration=3.600678791 podStartE2EDuration="3.600678791s" podCreationTimestamp="2026-01-27 14:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:59.599126868 +0000 UTC m=+1437.911476953" watchObservedRunningTime="2026-01-27 14:07:59.600678791 +0000 UTC m=+1437.913028876"
Jan 27 14:07:59 crc kubenswrapper[4914]: E0127 14:07:59.702700 4914 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode90f2995_a095_41fb_9f5f_67c297931376.slice/crio-a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode90f2995_a095_41fb_9f5f_67c297931376.slice/crio-conmon-a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 14:08:00 crc kubenswrapper[4914]: I0127 14:08:00.212430 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:00 crc kubenswrapper[4914]: I0127 14:08:00.587999 4914 generic.go:334] "Generic (PLEG): container finished" podID="e90f2995-a095-41fb-9f5f-67c297931376" containerID="a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604" exitCode=143
Jan 27 14:08:00 crc kubenswrapper[4914]: I0127 14:08:00.588095 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e90f2995-a095-41fb-9f5f-67c297931376","Type":"ContainerDied","Data":"a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604"}
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.315047 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.416045 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e90f2995-a095-41fb-9f5f-67c297931376-combined-ca-bundle\") pod \"e90f2995-a095-41fb-9f5f-67c297931376\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") "
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.416113 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqzdp\" (UniqueName: \"kubernetes.io/projected/e90f2995-a095-41fb-9f5f-67c297931376-kube-api-access-vqzdp\") pod \"e90f2995-a095-41fb-9f5f-67c297931376\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") "
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.416195 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e90f2995-a095-41fb-9f5f-67c297931376-logs\") pod \"e90f2995-a095-41fb-9f5f-67c297931376\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") "
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.416322 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e90f2995-a095-41fb-9f5f-67c297931376-config-data\") pod \"e90f2995-a095-41fb-9f5f-67c297931376\" (UID: \"e90f2995-a095-41fb-9f5f-67c297931376\") "
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.419063 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e90f2995-a095-41fb-9f5f-67c297931376-logs" (OuterVolumeSpecName: "logs") pod "e90f2995-a095-41fb-9f5f-67c297931376" (UID: "e90f2995-a095-41fb-9f5f-67c297931376"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.427713 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e90f2995-a095-41fb-9f5f-67c297931376-kube-api-access-vqzdp" (OuterVolumeSpecName: "kube-api-access-vqzdp") pod "e90f2995-a095-41fb-9f5f-67c297931376" (UID: "e90f2995-a095-41fb-9f5f-67c297931376"). InnerVolumeSpecName "kube-api-access-vqzdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.451054 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e90f2995-a095-41fb-9f5f-67c297931376-config-data" (OuterVolumeSpecName: "config-data") pod "e90f2995-a095-41fb-9f5f-67c297931376" (UID: "e90f2995-a095-41fb-9f5f-67c297931376"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.506363 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e90f2995-a095-41fb-9f5f-67c297931376-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e90f2995-a095-41fb-9f5f-67c297931376" (UID: "e90f2995-a095-41fb-9f5f-67c297931376"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.519902 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e90f2995-a095-41fb-9f5f-67c297931376-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.519981 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e90f2995-a095-41fb-9f5f-67c297931376-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.520053 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqzdp\" (UniqueName: \"kubernetes.io/projected/e90f2995-a095-41fb-9f5f-67c297931376-kube-api-access-vqzdp\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.520073 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e90f2995-a095-41fb-9f5f-67c297931376-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.543296 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.620927 4914 generic.go:334] "Generic (PLEG): container finished" podID="e90f2995-a095-41fb-9f5f-67c297931376" containerID="6cdfab577c393ae857de2741b61ef3d78b9ba7a26b55a3fa4a7c14baf5cb9baf" exitCode=0
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.620979 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e90f2995-a095-41fb-9f5f-67c297931376","Type":"ContainerDied","Data":"6cdfab577c393ae857de2741b61ef3d78b9ba7a26b55a3fa4a7c14baf5cb9baf"}
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.620997 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.621044 4914 scope.go:117] "RemoveContainer" containerID="6cdfab577c393ae857de2741b61ef3d78b9ba7a26b55a3fa4a7c14baf5cb9baf"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.621030 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e90f2995-a095-41fb-9f5f-67c297931376","Type":"ContainerDied","Data":"25f9c955fa35da9040862870edafcfedb614e5e5a7cd88ca36ea39c298c44422"}
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.621664 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9ee1d3b-10d2-4b14-a66c-31028d58d293-run-httpd\") pod \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") "
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.621689 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwgr2\" (UniqueName: \"kubernetes.io/projected/f9ee1d3b-10d2-4b14-a66c-31028d58d293-kube-api-access-mwgr2\") pod \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") "
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.621775 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-sg-core-conf-yaml\") pod \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") "
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.622110 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ee1d3b-10d2-4b14-a66c-31028d58d293-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f9ee1d3b-10d2-4b14-a66c-31028d58d293" (UID: "f9ee1d3b-10d2-4b14-a66c-31028d58d293"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.622602 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-scripts\") pod \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") "
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.622638 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-config-data\") pod \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") "
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.622686 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-ceilometer-tls-certs\") pod \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") "
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.622799 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-combined-ca-bundle\") pod \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") "
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.622883 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9ee1d3b-10d2-4b14-a66c-31028d58d293-log-httpd\") pod \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\" (UID: \"f9ee1d3b-10d2-4b14-a66c-31028d58d293\") "
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.623320 4914 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9ee1d3b-10d2-4b14-a66c-31028d58d293-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.624797 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ee1d3b-10d2-4b14-a66c-31028d58d293-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f9ee1d3b-10d2-4b14-a66c-31028d58d293" (UID: "f9ee1d3b-10d2-4b14-a66c-31028d58d293"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.625451 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ee1d3b-10d2-4b14-a66c-31028d58d293-kube-api-access-mwgr2" (OuterVolumeSpecName: "kube-api-access-mwgr2") pod "f9ee1d3b-10d2-4b14-a66c-31028d58d293" (UID: "f9ee1d3b-10d2-4b14-a66c-31028d58d293"). InnerVolumeSpecName "kube-api-access-mwgr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.631355 4914 generic.go:334] "Generic (PLEG): container finished" podID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerID="814c861ffe221ff32dd134bad6891d8ad8203761e8536a2a6682c4e55ffc892b" exitCode=0
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.631411 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9ee1d3b-10d2-4b14-a66c-31028d58d293","Type":"ContainerDied","Data":"814c861ffe221ff32dd134bad6891d8ad8203761e8536a2a6682c4e55ffc892b"}
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.631443 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9ee1d3b-10d2-4b14-a66c-31028d58d293","Type":"ContainerDied","Data":"c22b7e1776cae308ed9d419725f12fef2f4cb69f38a93501fe9260c8f888d9c3"}
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.631530 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.633105 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-scripts" (OuterVolumeSpecName: "scripts") pod "f9ee1d3b-10d2-4b14-a66c-31028d58d293" (UID: "f9ee1d3b-10d2-4b14-a66c-31028d58d293"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.650112 4914 scope.go:117] "RemoveContainer" containerID="a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.677430 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.687677 4914 scope.go:117] "RemoveContainer" containerID="6cdfab577c393ae857de2741b61ef3d78b9ba7a26b55a3fa4a7c14baf5cb9baf"
Jan 27 14:08:03 crc kubenswrapper[4914]: E0127 14:08:03.688441 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cdfab577c393ae857de2741b61ef3d78b9ba7a26b55a3fa4a7c14baf5cb9baf\": container with ID starting with 6cdfab577c393ae857de2741b61ef3d78b9ba7a26b55a3fa4a7c14baf5cb9baf not found: ID does not exist" containerID="6cdfab577c393ae857de2741b61ef3d78b9ba7a26b55a3fa4a7c14baf5cb9baf"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.688468 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cdfab577c393ae857de2741b61ef3d78b9ba7a26b55a3fa4a7c14baf5cb9baf"} err="failed to get container status \"6cdfab577c393ae857de2741b61ef3d78b9ba7a26b55a3fa4a7c14baf5cb9baf\": rpc error: code = NotFound desc = could not find container \"6cdfab577c393ae857de2741b61ef3d78b9ba7a26b55a3fa4a7c14baf5cb9baf\": container with ID starting with 6cdfab577c393ae857de2741b61ef3d78b9ba7a26b55a3fa4a7c14baf5cb9baf not found: ID does not exist"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.688509 4914 scope.go:117] "RemoveContainer" containerID="a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604"
Jan 27 14:08:03 crc kubenswrapper[4914]: E0127 14:08:03.689047 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604\": container with ID starting with a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604 not found: ID does not exist" containerID="a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.689066 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604"} err="failed to get container status \"a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604\": rpc error: code = NotFound desc = could not find container \"a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604\": container with ID starting with a5fe27b3f5b50cb541003fb27c563c1f81b342827d8835a32e30db73a6235604 not found: ID does not exist"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.689081 4914 scope.go:117] "RemoveContainer" containerID="d2a4ff8df0830fecbac1a552f9fd9d4708c45cd41f8c9e396342f00667ab4b0d"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.690483 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f9ee1d3b-10d2-4b14-a66c-31028d58d293" (UID: "f9ee1d3b-10d2-4b14-a66c-31028d58d293"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.712576 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.713921 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f9ee1d3b-10d2-4b14-a66c-31028d58d293" (UID: "f9ee1d3b-10d2-4b14-a66c-31028d58d293"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.727996 4914 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9ee1d3b-10d2-4b14-a66c-31028d58d293-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.728039 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwgr2\" (UniqueName: \"kubernetes.io/projected/f9ee1d3b-10d2-4b14-a66c-31028d58d293-kube-api-access-mwgr2\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.728054 4914 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.728066 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.728079 4914 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.736383 4914 scope.go:117] "RemoveContainer" containerID="6cab203ff0db5e699a0278fd9921cad1759d056bd89b6db89400e252e04c5a8e"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.742770 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:08:03 crc kubenswrapper[4914]: E0127 14:08:03.743247 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e90f2995-a095-41fb-9f5f-67c297931376" containerName="nova-api-log"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.743272 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90f2995-a095-41fb-9f5f-67c297931376" containerName="nova-api-log"
Jan 27 14:08:03 crc kubenswrapper[4914]: E0127 14:08:03.743293 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="sg-core"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.743302 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="sg-core"
Jan 27 14:08:03 crc kubenswrapper[4914]: E0127 14:08:03.743312 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e90f2995-a095-41fb-9f5f-67c297931376" containerName="nova-api-api"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.743334 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90f2995-a095-41fb-9f5f-67c297931376" containerName="nova-api-api"
Jan 27 14:08:03 crc kubenswrapper[4914]: E0127 14:08:03.743364 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="ceilometer-central-agent"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.743374 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="ceilometer-central-agent"
Jan 27 14:08:03 crc kubenswrapper[4914]: E0127 14:08:03.743387 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="proxy-httpd"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.743396 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="proxy-httpd"
Jan 27 14:08:03 crc kubenswrapper[4914]: E0127 14:08:03.743405 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="ceilometer-notification-agent"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.743413 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="ceilometer-notification-agent"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.743655 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="proxy-httpd"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.743673 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="ceilometer-central-agent"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.743691 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="e90f2995-a095-41fb-9f5f-67c297931376" containerName="nova-api-log"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.743706 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="ceilometer-notification-agent"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.743723 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" containerName="sg-core"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.743743 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="e90f2995-a095-41fb-9f5f-67c297931376" containerName="nova-api-api"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.744979 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.756296 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.756466 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.756563 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.762501 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.765597 4914 scope.go:117] "RemoveContainer" containerID="814c861ffe221ff32dd134bad6891d8ad8203761e8536a2a6682c4e55ffc892b"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.791246 4914 scope.go:117] "RemoveContainer" containerID="395f2aa7bc2819e4d3d89940c2194681325c6f902f6ee2ac1828cee2fcb7d452"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.809484 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-config-data" (OuterVolumeSpecName: "config-data") pod "f9ee1d3b-10d2-4b14-a66c-31028d58d293" (UID: "f9ee1d3b-10d2-4b14-a66c-31028d58d293"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.812028 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9ee1d3b-10d2-4b14-a66c-31028d58d293" (UID: "f9ee1d3b-10d2-4b14-a66c-31028d58d293"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.814389 4914 scope.go:117] "RemoveContainer" containerID="d2a4ff8df0830fecbac1a552f9fd9d4708c45cd41f8c9e396342f00667ab4b0d"
Jan 27 14:08:03 crc kubenswrapper[4914]: E0127 14:08:03.814772 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a4ff8df0830fecbac1a552f9fd9d4708c45cd41f8c9e396342f00667ab4b0d\": container with ID starting with d2a4ff8df0830fecbac1a552f9fd9d4708c45cd41f8c9e396342f00667ab4b0d not found: ID does not exist" containerID="d2a4ff8df0830fecbac1a552f9fd9d4708c45cd41f8c9e396342f00667ab4b0d"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.814814 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a4ff8df0830fecbac1a552f9fd9d4708c45cd41f8c9e396342f00667ab4b0d"} err="failed to get container status \"d2a4ff8df0830fecbac1a552f9fd9d4708c45cd41f8c9e396342f00667ab4b0d\": rpc error: code = NotFound desc = could not find container \"d2a4ff8df0830fecbac1a552f9fd9d4708c45cd41f8c9e396342f00667ab4b0d\": container with ID starting with d2a4ff8df0830fecbac1a552f9fd9d4708c45cd41f8c9e396342f00667ab4b0d not found: ID does not exist"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.814859 4914 scope.go:117] "RemoveContainer" containerID="6cab203ff0db5e699a0278fd9921cad1759d056bd89b6db89400e252e04c5a8e"
Jan 27 14:08:03 crc kubenswrapper[4914]: E0127 14:08:03.815520 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cab203ff0db5e699a0278fd9921cad1759d056bd89b6db89400e252e04c5a8e\": container with ID starting with 6cab203ff0db5e699a0278fd9921cad1759d056bd89b6db89400e252e04c5a8e not found: ID does not exist" containerID="6cab203ff0db5e699a0278fd9921cad1759d056bd89b6db89400e252e04c5a8e"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.815571 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cab203ff0db5e699a0278fd9921cad1759d056bd89b6db89400e252e04c5a8e"} err="failed to get container status \"6cab203ff0db5e699a0278fd9921cad1759d056bd89b6db89400e252e04c5a8e\": rpc error: code = NotFound desc = could not find container \"6cab203ff0db5e699a0278fd9921cad1759d056bd89b6db89400e252e04c5a8e\": container with ID starting with 6cab203ff0db5e699a0278fd9921cad1759d056bd89b6db89400e252e04c5a8e not found: ID does not exist"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.815597 4914 scope.go:117] "RemoveContainer" containerID="814c861ffe221ff32dd134bad6891d8ad8203761e8536a2a6682c4e55ffc892b"
Jan 27 14:08:03 crc kubenswrapper[4914]: E0127 14:08:03.815923 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"814c861ffe221ff32dd134bad6891d8ad8203761e8536a2a6682c4e55ffc892b\": container with ID starting with 814c861ffe221ff32dd134bad6891d8ad8203761e8536a2a6682c4e55ffc892b not found: ID does not exist" containerID="814c861ffe221ff32dd134bad6891d8ad8203761e8536a2a6682c4e55ffc892b"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.815961 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"814c861ffe221ff32dd134bad6891d8ad8203761e8536a2a6682c4e55ffc892b"} err="failed to get container status \"814c861ffe221ff32dd134bad6891d8ad8203761e8536a2a6682c4e55ffc892b\": rpc error: code = NotFound desc = could not find container \"814c861ffe221ff32dd134bad6891d8ad8203761e8536a2a6682c4e55ffc892b\": container with ID starting with 814c861ffe221ff32dd134bad6891d8ad8203761e8536a2a6682c4e55ffc892b not found: ID does not exist"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.815978 4914 scope.go:117] "RemoveContainer" containerID="395f2aa7bc2819e4d3d89940c2194681325c6f902f6ee2ac1828cee2fcb7d452"
Jan 27 14:08:03 crc kubenswrapper[4914]: E0127 14:08:03.816201 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"395f2aa7bc2819e4d3d89940c2194681325c6f902f6ee2ac1828cee2fcb7d452\": container with ID starting with 395f2aa7bc2819e4d3d89940c2194681325c6f902f6ee2ac1828cee2fcb7d452 not found: ID does not exist" containerID="395f2aa7bc2819e4d3d89940c2194681325c6f902f6ee2ac1828cee2fcb7d452"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.816230 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"395f2aa7bc2819e4d3d89940c2194681325c6f902f6ee2ac1828cee2fcb7d452"} err="failed to get container status \"395f2aa7bc2819e4d3d89940c2194681325c6f902f6ee2ac1828cee2fcb7d452\": rpc error: code = NotFound desc = could not find container \"395f2aa7bc2819e4d3d89940c2194681325c6f902f6ee2ac1828cee2fcb7d452\": container with ID starting with 395f2aa7bc2819e4d3d89940c2194681325c6f902f6ee2ac1828cee2fcb7d452 not found: ID does not exist"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.829727 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9d52bb9-2efc-42fa-a2c9-75f671775895-logs\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.829794 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-config-data\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.829864 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.829917 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.829940 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wn88\" (UniqueName: \"kubernetes.io/projected/f9d52bb9-2efc-42fa-a2c9-75f671775895-kube-api-access-7wn88\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.830025 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0"
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.830110 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.830125 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ee1d3b-10d2-4b14-a66c-31028d58d293-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.931499 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\"
(UniqueName: \"kubernetes.io/empty-dir/f9d52bb9-2efc-42fa-a2c9-75f671775895-logs\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0" Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.931545 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-config-data\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0" Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.931585 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0" Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.931623 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0" Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.931641 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wn88\" (UniqueName: \"kubernetes.io/projected/f9d52bb9-2efc-42fa-a2c9-75f671775895-kube-api-access-7wn88\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0" Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.931704 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0" Jan 27 14:08:03 crc 
kubenswrapper[4914]: I0127 14:08:03.936098 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9d52bb9-2efc-42fa-a2c9-75f671775895-logs\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0" Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.939528 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0" Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.939599 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0" Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.940433 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-config-data\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0" Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.947611 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0" Jan 27 14:08:03 crc kubenswrapper[4914]: I0127 14:08:03.950031 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wn88\" (UniqueName: \"kubernetes.io/projected/f9d52bb9-2efc-42fa-a2c9-75f671775895-kube-api-access-7wn88\") pod \"nova-api-0\" (UID: 
\"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " pod="openstack/nova-api-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.049594 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.058223 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.074128 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.077626 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.079809 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.085706 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.086385 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.088895 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.096058 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.136013 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.136248 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8mm8\" (UniqueName: \"kubernetes.io/projected/ed0f4049-d67b-4534-a821-8cbefb969a63-kube-api-access-h8mm8\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.136308 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed0f4049-d67b-4534-a821-8cbefb969a63-log-httpd\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.136329 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-scripts\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.136532 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.136708 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.136805 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ed0f4049-d67b-4534-a821-8cbefb969a63-run-httpd\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.136979 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-config-data\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.238756 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8mm8\" (UniqueName: \"kubernetes.io/projected/ed0f4049-d67b-4534-a821-8cbefb969a63-kube-api-access-h8mm8\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.238898 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed0f4049-d67b-4534-a821-8cbefb969a63-log-httpd\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.238949 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-scripts\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.239042 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc 
kubenswrapper[4914]: I0127 14:08:04.239200 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.239419 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed0f4049-d67b-4534-a821-8cbefb969a63-run-httpd\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.239569 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed0f4049-d67b-4534-a821-8cbefb969a63-log-httpd\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.239982 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-config-data\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.240061 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.240105 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed0f4049-d67b-4534-a821-8cbefb969a63-run-httpd\") pod \"ceilometer-0\" 
(UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.243941 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-scripts\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.244162 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.244766 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-config-data\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.245951 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.251559 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed0f4049-d67b-4534-a821-8cbefb969a63-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.258075 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8mm8\" (UniqueName: 
\"kubernetes.io/projected/ed0f4049-d67b-4534-a821-8cbefb969a63-kube-api-access-h8mm8\") pod \"ceilometer-0\" (UID: \"ed0f4049-d67b-4534-a821-8cbefb969a63\") " pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.308565 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e90f2995-a095-41fb-9f5f-67c297931376" path="/var/lib/kubelet/pods/e90f2995-a095-41fb-9f5f-67c297931376/volumes" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.309375 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ee1d3b-10d2-4b14-a66c-31028d58d293" path="/var/lib/kubelet/pods/f9ee1d3b-10d2-4b14-a66c-31028d58d293/volumes" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.491721 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.633708 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.654377 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9d52bb9-2efc-42fa-a2c9-75f671775895","Type":"ContainerStarted","Data":"57821b5ff4dca906f5f4486739d46d82053fb5e2dbd32a5c67636a60c7eac3c7"} Jan 27 14:08:04 crc kubenswrapper[4914]: W0127 14:08:04.974688 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded0f4049_d67b_4534_a821_8cbefb969a63.slice/crio-a85de6f7d5d432fabd97f6c7ffc3eaf7c74ff824bbd57cd63a04bec4f73c1a93 WatchSource:0}: Error finding container a85de6f7d5d432fabd97f6c7ffc3eaf7c74ff824bbd57cd63a04bec4f73c1a93: Status 404 returned error can't find the container with id a85de6f7d5d432fabd97f6c7ffc3eaf7c74ff824bbd57cd63a04bec4f73c1a93 Jan 27 14:08:04 crc kubenswrapper[4914]: I0127 14:08:04.977361 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 
14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.212558 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.230419 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.665128 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed0f4049-d67b-4534-a821-8cbefb969a63","Type":"ContainerStarted","Data":"a85de6f7d5d432fabd97f6c7ffc3eaf7c74ff824bbd57cd63a04bec4f73c1a93"} Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.669251 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9d52bb9-2efc-42fa-a2c9-75f671775895","Type":"ContainerStarted","Data":"46eb8a6fc2df24df5351a89dec08958fae760146196bc5a631aaf078ad65075e"} Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.669285 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9d52bb9-2efc-42fa-a2c9-75f671775895","Type":"ContainerStarted","Data":"08a9bb1683b779dd9b09f58894a4efea149eb5f79f06fb911bc5a730d13fa4bd"} Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.683364 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.695729 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.695709673 podStartE2EDuration="2.695709673s" podCreationTimestamp="2026-01-27 14:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:05.693485332 +0000 UTC m=+1444.005835497" watchObservedRunningTime="2026-01-27 14:08:05.695709673 +0000 UTC m=+1444.008059758" Jan 27 14:08:05 crc 
kubenswrapper[4914]: I0127 14:08:05.847461 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-gzxfk"] Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.851686 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.854198 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.854400 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.864150 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gzxfk"] Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.874074 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-config-data\") pod \"nova-cell1-cell-mapping-gzxfk\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.874120 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-scripts\") pod \"nova-cell1-cell-mapping-gzxfk\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.874201 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8k9m\" (UniqueName: \"kubernetes.io/projected/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-kube-api-access-d8k9m\") pod \"nova-cell1-cell-mapping-gzxfk\" (UID: 
\"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.874271 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gzxfk\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.976143 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-config-data\") pod \"nova-cell1-cell-mapping-gzxfk\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.976479 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-scripts\") pod \"nova-cell1-cell-mapping-gzxfk\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.976520 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8k9m\" (UniqueName: \"kubernetes.io/projected/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-kube-api-access-d8k9m\") pod \"nova-cell1-cell-mapping-gzxfk\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.976605 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gzxfk\" (UID: 
\"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.981301 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-scripts\") pod \"nova-cell1-cell-mapping-gzxfk\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.981424 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-config-data\") pod \"nova-cell1-cell-mapping-gzxfk\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.982421 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gzxfk\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:05 crc kubenswrapper[4914]: I0127 14:08:05.994208 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8k9m\" (UniqueName: \"kubernetes.io/projected/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-kube-api-access-d8k9m\") pod \"nova-cell1-cell-mapping-gzxfk\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:06 crc kubenswrapper[4914]: I0127 14:08:06.187947 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:06 crc kubenswrapper[4914]: I0127 14:08:06.664548 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gzxfk"] Jan 27 14:08:06 crc kubenswrapper[4914]: I0127 14:08:06.683733 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed0f4049-d67b-4534-a821-8cbefb969a63","Type":"ContainerStarted","Data":"43a15dd6de5085c8096925f2046b71a72b0467ce7f3e50071bf3d9ce3fd95e40"} Jan 27 14:08:06 crc kubenswrapper[4914]: I0127 14:08:06.686174 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gzxfk" event={"ID":"fc2c043b-2bd3-4238-bcd2-f44f4191cad8","Type":"ContainerStarted","Data":"8fb77e63f2ba9ea67aea3806debf16b01bd8baaab6b35c3fa799c39b087bb0e1"} Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.081190 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.159105 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-6wnl7"] Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.159461 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" podUID="a108d6f6-d3b1-4480-b9d4-ff8273c10546" containerName="dnsmasq-dns" containerID="cri-o://5ceb4ed78648f4597d51777c757f02a0b246eed38d9872e9ed2ab3f53a2d196b" gracePeriod=10 Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.697272 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed0f4049-d67b-4534-a821-8cbefb969a63","Type":"ContainerStarted","Data":"fc13303c7ef66197e43184abfc617d22010204a63a5560d996f38e8a742dd09f"} Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.699902 4914 generic.go:334] "Generic (PLEG): container finished" 
podID="a108d6f6-d3b1-4480-b9d4-ff8273c10546" containerID="5ceb4ed78648f4597d51777c757f02a0b246eed38d9872e9ed2ab3f53a2d196b" exitCode=0 Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.699939 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" event={"ID":"a108d6f6-d3b1-4480-b9d4-ff8273c10546","Type":"ContainerDied","Data":"5ceb4ed78648f4597d51777c757f02a0b246eed38d9872e9ed2ab3f53a2d196b"} Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.699989 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" event={"ID":"a108d6f6-d3b1-4480-b9d4-ff8273c10546","Type":"ContainerDied","Data":"d5b38ba496afe19a1b3c78cec05117edc65b7ca27be96bdd27257b308458e6c0"} Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.700005 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5b38ba496afe19a1b3c78cec05117edc65b7ca27be96bdd27257b308458e6c0" Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.702259 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gzxfk" event={"ID":"fc2c043b-2bd3-4238-bcd2-f44f4191cad8","Type":"ContainerStarted","Data":"55712590807c72cd4dc50ff38ec9f3e3a3dfe9138a73ee090b854e60939b247b"} Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.729182 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-gzxfk" podStartSLOduration=2.729143476 podStartE2EDuration="2.729143476s" podCreationTimestamp="2026-01-27 14:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:07.718378229 +0000 UTC m=+1446.030728314" watchObservedRunningTime="2026-01-27 14:08:07.729143476 +0000 UTC m=+1446.041493561" Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.757490 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.817568 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-dns-swift-storage-0\") pod \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.817641 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-dns-svc\") pod \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.817709 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-ovsdbserver-nb\") pod \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.817791 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-ovsdbserver-sb\") pod \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.817909 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-config\") pod \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.817931 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kptzm\" 
(UniqueName: \"kubernetes.io/projected/a108d6f6-d3b1-4480-b9d4-ff8273c10546-kube-api-access-kptzm\") pod \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\" (UID: \"a108d6f6-d3b1-4480-b9d4-ff8273c10546\") " Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.847063 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a108d6f6-d3b1-4480-b9d4-ff8273c10546-kube-api-access-kptzm" (OuterVolumeSpecName: "kube-api-access-kptzm") pod "a108d6f6-d3b1-4480-b9d4-ff8273c10546" (UID: "a108d6f6-d3b1-4480-b9d4-ff8273c10546"). InnerVolumeSpecName "kube-api-access-kptzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.925467 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kptzm\" (UniqueName: \"kubernetes.io/projected/a108d6f6-d3b1-4480-b9d4-ff8273c10546-kube-api-access-kptzm\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.935742 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-config" (OuterVolumeSpecName: "config") pod "a108d6f6-d3b1-4480-b9d4-ff8273c10546" (UID: "a108d6f6-d3b1-4480-b9d4-ff8273c10546"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.936170 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a108d6f6-d3b1-4480-b9d4-ff8273c10546" (UID: "a108d6f6-d3b1-4480-b9d4-ff8273c10546"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.936212 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a108d6f6-d3b1-4480-b9d4-ff8273c10546" (UID: "a108d6f6-d3b1-4480-b9d4-ff8273c10546"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.945521 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a108d6f6-d3b1-4480-b9d4-ff8273c10546" (UID: "a108d6f6-d3b1-4480-b9d4-ff8273c10546"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:08:07 crc kubenswrapper[4914]: I0127 14:08:07.968471 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a108d6f6-d3b1-4480-b9d4-ff8273c10546" (UID: "a108d6f6-d3b1-4480-b9d4-ff8273c10546"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:08:08 crc kubenswrapper[4914]: I0127 14:08:08.027320 4914 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:08 crc kubenswrapper[4914]: I0127 14:08:08.027362 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:08 crc kubenswrapper[4914]: I0127 14:08:08.027374 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:08 crc kubenswrapper[4914]: I0127 14:08:08.027385 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:08 crc kubenswrapper[4914]: I0127 14:08:08.027395 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a108d6f6-d3b1-4480-b9d4-ff8273c10546-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:08 crc kubenswrapper[4914]: I0127 14:08:08.712931 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed0f4049-d67b-4534-a821-8cbefb969a63","Type":"ContainerStarted","Data":"530ebe403bc752cc90fe1b639470f9c7f7aeead48e1eac17c92a3c37e900d124"} Jan 27 14:08:08 crc kubenswrapper[4914]: I0127 14:08:08.713002 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-6wnl7" Jan 27 14:08:08 crc kubenswrapper[4914]: I0127 14:08:08.744948 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-6wnl7"] Jan 27 14:08:08 crc kubenswrapper[4914]: I0127 14:08:08.757623 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-6wnl7"] Jan 27 14:08:10 crc kubenswrapper[4914]: I0127 14:08:10.306197 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a108d6f6-d3b1-4480-b9d4-ff8273c10546" path="/var/lib/kubelet/pods/a108d6f6-d3b1-4480-b9d4-ff8273c10546/volumes" Jan 27 14:08:11 crc kubenswrapper[4914]: I0127 14:08:11.746408 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed0f4049-d67b-4534-a821-8cbefb969a63","Type":"ContainerStarted","Data":"ece0e72b89f18d7b30c06bceba927cc7068239d8d9736cc16dde588f441ebd85"} Jan 27 14:08:11 crc kubenswrapper[4914]: I0127 14:08:11.747040 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:08:11 crc kubenswrapper[4914]: I0127 14:08:11.785143 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.005070246 podStartE2EDuration="7.785117151s" podCreationTimestamp="2026-01-27 14:08:04 +0000 UTC" firstStartedPulling="2026-01-27 14:08:04.977089855 +0000 UTC m=+1443.289439940" lastFinishedPulling="2026-01-27 14:08:10.75713674 +0000 UTC m=+1449.069486845" observedRunningTime="2026-01-27 14:08:11.773982144 +0000 UTC m=+1450.086332249" watchObservedRunningTime="2026-01-27 14:08:11.785117151 +0000 UTC m=+1450.097467246" Jan 27 14:08:12 crc kubenswrapper[4914]: I0127 14:08:12.759015 4914 generic.go:334] "Generic (PLEG): container finished" podID="fc2c043b-2bd3-4238-bcd2-f44f4191cad8" containerID="55712590807c72cd4dc50ff38ec9f3e3a3dfe9138a73ee090b854e60939b247b" exitCode=0 Jan 27 14:08:12 crc 
kubenswrapper[4914]: I0127 14:08:12.759159 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gzxfk" event={"ID":"fc2c043b-2bd3-4238-bcd2-f44f4191cad8","Type":"ContainerDied","Data":"55712590807c72cd4dc50ff38ec9f3e3a3dfe9138a73ee090b854e60939b247b"} Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.075428 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.084289 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.154220 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.298652 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-config-data\") pod \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.299118 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-combined-ca-bundle\") pod \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.299284 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8k9m\" (UniqueName: \"kubernetes.io/projected/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-kube-api-access-d8k9m\") pod \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.299469 4914 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-scripts\") pod \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\" (UID: \"fc2c043b-2bd3-4238-bcd2-f44f4191cad8\") " Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.320129 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-scripts" (OuterVolumeSpecName: "scripts") pod "fc2c043b-2bd3-4238-bcd2-f44f4191cad8" (UID: "fc2c043b-2bd3-4238-bcd2-f44f4191cad8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.320447 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-kube-api-access-d8k9m" (OuterVolumeSpecName: "kube-api-access-d8k9m") pod "fc2c043b-2bd3-4238-bcd2-f44f4191cad8" (UID: "fc2c043b-2bd3-4238-bcd2-f44f4191cad8"). InnerVolumeSpecName "kube-api-access-d8k9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.330949 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-config-data" (OuterVolumeSpecName: "config-data") pod "fc2c043b-2bd3-4238-bcd2-f44f4191cad8" (UID: "fc2c043b-2bd3-4238-bcd2-f44f4191cad8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.331315 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc2c043b-2bd3-4238-bcd2-f44f4191cad8" (UID: "fc2c043b-2bd3-4238-bcd2-f44f4191cad8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.403994 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8k9m\" (UniqueName: \"kubernetes.io/projected/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-kube-api-access-d8k9m\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.404038 4914 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.404054 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.404071 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2c043b-2bd3-4238-bcd2-f44f4191cad8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.787697 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gzxfk" Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.787765 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gzxfk" event={"ID":"fc2c043b-2bd3-4238-bcd2-f44f4191cad8","Type":"ContainerDied","Data":"8fb77e63f2ba9ea67aea3806debf16b01bd8baaab6b35c3fa799c39b087bb0e1"} Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.788656 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fb77e63f2ba9ea67aea3806debf16b01bd8baaab6b35c3fa799c39b087bb0e1" Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.983015 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.992551 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:08:14 crc kubenswrapper[4914]: I0127 14:08:14.992757 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="04aef3ba-9fc4-4b4e-821c-66c59df81d31" containerName="nova-scheduler-scheduler" containerID="cri-o://9b820cf13d68952ff032818155814e0fd674bae225d6a06a6384f912089e623d" gracePeriod=30 Jan 27 14:08:15 crc kubenswrapper[4914]: I0127 14:08:15.058307 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:08:15 crc kubenswrapper[4914]: I0127 14:08:15.058565 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1f18bf8b-0605-4134-9164-6b260b45d655" containerName="nova-metadata-log" containerID="cri-o://1a1ae5ea7ef1d93ee16288df8fa20c5c933aead4a3285658c76915e7246df314" gracePeriod=30 Jan 27 14:08:15 crc kubenswrapper[4914]: I0127 14:08:15.058672 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1f18bf8b-0605-4134-9164-6b260b45d655" 
containerName="nova-metadata-metadata" containerID="cri-o://d2c90702edddcfa7e140951a3eb81347b66cf038b59705282e65abafb3f7f7de" gracePeriod=30 Jan 27 14:08:15 crc kubenswrapper[4914]: I0127 14:08:15.083066 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9d52bb9-2efc-42fa-a2c9-75f671775895" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.215:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:08:15 crc kubenswrapper[4914]: I0127 14:08:15.090105 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9d52bb9-2efc-42fa-a2c9-75f671775895" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.215:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:08:15 crc kubenswrapper[4914]: I0127 14:08:15.797282 4914 generic.go:334] "Generic (PLEG): container finished" podID="1f18bf8b-0605-4134-9164-6b260b45d655" containerID="1a1ae5ea7ef1d93ee16288df8fa20c5c933aead4a3285658c76915e7246df314" exitCode=143 Jan 27 14:08:15 crc kubenswrapper[4914]: I0127 14:08:15.797777 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f9d52bb9-2efc-42fa-a2c9-75f671775895" containerName="nova-api-log" containerID="cri-o://08a9bb1683b779dd9b09f58894a4efea149eb5f79f06fb911bc5a730d13fa4bd" gracePeriod=30 Jan 27 14:08:15 crc kubenswrapper[4914]: I0127 14:08:15.797353 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f18bf8b-0605-4134-9164-6b260b45d655","Type":"ContainerDied","Data":"1a1ae5ea7ef1d93ee16288df8fa20c5c933aead4a3285658c76915e7246df314"} Jan 27 14:08:15 crc kubenswrapper[4914]: I0127 14:08:15.798005 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f9d52bb9-2efc-42fa-a2c9-75f671775895" 
containerName="nova-api-api" containerID="cri-o://46eb8a6fc2df24df5351a89dec08958fae760146196bc5a631aaf078ad65075e" gracePeriod=30 Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.544549 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.654133 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vbcj\" (UniqueName: \"kubernetes.io/projected/04aef3ba-9fc4-4b4e-821c-66c59df81d31-kube-api-access-6vbcj\") pod \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\" (UID: \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\") " Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.654201 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aef3ba-9fc4-4b4e-821c-66c59df81d31-config-data\") pod \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\" (UID: \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\") " Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.654261 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aef3ba-9fc4-4b4e-821c-66c59df81d31-combined-ca-bundle\") pod \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\" (UID: \"04aef3ba-9fc4-4b4e-821c-66c59df81d31\") " Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.664028 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04aef3ba-9fc4-4b4e-821c-66c59df81d31-kube-api-access-6vbcj" (OuterVolumeSpecName: "kube-api-access-6vbcj") pod "04aef3ba-9fc4-4b4e-821c-66c59df81d31" (UID: "04aef3ba-9fc4-4b4e-821c-66c59df81d31"). InnerVolumeSpecName "kube-api-access-6vbcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.680335 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04aef3ba-9fc4-4b4e-821c-66c59df81d31-config-data" (OuterVolumeSpecName: "config-data") pod "04aef3ba-9fc4-4b4e-821c-66c59df81d31" (UID: "04aef3ba-9fc4-4b4e-821c-66c59df81d31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.696865 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04aef3ba-9fc4-4b4e-821c-66c59df81d31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04aef3ba-9fc4-4b4e-821c-66c59df81d31" (UID: "04aef3ba-9fc4-4b4e-821c-66c59df81d31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.756659 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vbcj\" (UniqueName: \"kubernetes.io/projected/04aef3ba-9fc4-4b4e-821c-66c59df81d31-kube-api-access-6vbcj\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.756692 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aef3ba-9fc4-4b4e-821c-66c59df81d31-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.756701 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aef3ba-9fc4-4b4e-821c-66c59df81d31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.807066 4914 generic.go:334] "Generic (PLEG): container finished" podID="04aef3ba-9fc4-4b4e-821c-66c59df81d31" containerID="9b820cf13d68952ff032818155814e0fd674bae225d6a06a6384f912089e623d" 
exitCode=0 Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.807101 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.807141 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"04aef3ba-9fc4-4b4e-821c-66c59df81d31","Type":"ContainerDied","Data":"9b820cf13d68952ff032818155814e0fd674bae225d6a06a6384f912089e623d"} Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.807196 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"04aef3ba-9fc4-4b4e-821c-66c59df81d31","Type":"ContainerDied","Data":"98efb747ed5d262ef3f21f527850d00212a4ff51428aa305a4ac47ef999d51a5"} Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.807218 4914 scope.go:117] "RemoveContainer" containerID="9b820cf13d68952ff032818155814e0fd674bae225d6a06a6384f912089e623d" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.809141 4914 generic.go:334] "Generic (PLEG): container finished" podID="f9d52bb9-2efc-42fa-a2c9-75f671775895" containerID="08a9bb1683b779dd9b09f58894a4efea149eb5f79f06fb911bc5a730d13fa4bd" exitCode=143 Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.809191 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9d52bb9-2efc-42fa-a2c9-75f671775895","Type":"ContainerDied","Data":"08a9bb1683b779dd9b09f58894a4efea149eb5f79f06fb911bc5a730d13fa4bd"} Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.834705 4914 scope.go:117] "RemoveContainer" containerID="9b820cf13d68952ff032818155814e0fd674bae225d6a06a6384f912089e623d" Jan 27 14:08:16 crc kubenswrapper[4914]: E0127 14:08:16.835274 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b820cf13d68952ff032818155814e0fd674bae225d6a06a6384f912089e623d\": container with ID starting with 
9b820cf13d68952ff032818155814e0fd674bae225d6a06a6384f912089e623d not found: ID does not exist" containerID="9b820cf13d68952ff032818155814e0fd674bae225d6a06a6384f912089e623d" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.835324 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b820cf13d68952ff032818155814e0fd674bae225d6a06a6384f912089e623d"} err="failed to get container status \"9b820cf13d68952ff032818155814e0fd674bae225d6a06a6384f912089e623d\": rpc error: code = NotFound desc = could not find container \"9b820cf13d68952ff032818155814e0fd674bae225d6a06a6384f912089e623d\": container with ID starting with 9b820cf13d68952ff032818155814e0fd674bae225d6a06a6384f912089e623d not found: ID does not exist" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.836270 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.843247 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.856986 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:08:16 crc kubenswrapper[4914]: E0127 14:08:16.857411 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2c043b-2bd3-4238-bcd2-f44f4191cad8" containerName="nova-manage" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.857430 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2c043b-2bd3-4238-bcd2-f44f4191cad8" containerName="nova-manage" Jan 27 14:08:16 crc kubenswrapper[4914]: E0127 14:08:16.857441 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a108d6f6-d3b1-4480-b9d4-ff8273c10546" containerName="init" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.857447 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a108d6f6-d3b1-4480-b9d4-ff8273c10546" containerName="init" Jan 27 14:08:16 
crc kubenswrapper[4914]: E0127 14:08:16.857480 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04aef3ba-9fc4-4b4e-821c-66c59df81d31" containerName="nova-scheduler-scheduler" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.857486 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="04aef3ba-9fc4-4b4e-821c-66c59df81d31" containerName="nova-scheduler-scheduler" Jan 27 14:08:16 crc kubenswrapper[4914]: E0127 14:08:16.857504 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a108d6f6-d3b1-4480-b9d4-ff8273c10546" containerName="dnsmasq-dns" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.857512 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="a108d6f6-d3b1-4480-b9d4-ff8273c10546" containerName="dnsmasq-dns" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.857677 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="04aef3ba-9fc4-4b4e-821c-66c59df81d31" containerName="nova-scheduler-scheduler" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.857710 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="a108d6f6-d3b1-4480-b9d4-ff8273c10546" containerName="dnsmasq-dns" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.857725 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2c043b-2bd3-4238-bcd2-f44f4191cad8" containerName="nova-manage" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.858418 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.863797 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.865016 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.959252 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64163376-81f8-423f-9d43-78fb10db0c61-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"64163376-81f8-423f-9d43-78fb10db0c61\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.959348 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5kfc\" (UniqueName: \"kubernetes.io/projected/64163376-81f8-423f-9d43-78fb10db0c61-kube-api-access-q5kfc\") pod \"nova-scheduler-0\" (UID: \"64163376-81f8-423f-9d43-78fb10db0c61\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:16 crc kubenswrapper[4914]: I0127 14:08:16.959436 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64163376-81f8-423f-9d43-78fb10db0c61-config-data\") pod \"nova-scheduler-0\" (UID: \"64163376-81f8-423f-9d43-78fb10db0c61\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:17 crc kubenswrapper[4914]: I0127 14:08:17.061365 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64163376-81f8-423f-9d43-78fb10db0c61-config-data\") pod \"nova-scheduler-0\" (UID: \"64163376-81f8-423f-9d43-78fb10db0c61\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:17 crc kubenswrapper[4914]: I0127 14:08:17.061505 4914 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64163376-81f8-423f-9d43-78fb10db0c61-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"64163376-81f8-423f-9d43-78fb10db0c61\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:17 crc kubenswrapper[4914]: I0127 14:08:17.061575 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5kfc\" (UniqueName: \"kubernetes.io/projected/64163376-81f8-423f-9d43-78fb10db0c61-kube-api-access-q5kfc\") pod \"nova-scheduler-0\" (UID: \"64163376-81f8-423f-9d43-78fb10db0c61\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:17 crc kubenswrapper[4914]: I0127 14:08:17.065334 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64163376-81f8-423f-9d43-78fb10db0c61-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"64163376-81f8-423f-9d43-78fb10db0c61\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:17 crc kubenswrapper[4914]: I0127 14:08:17.073183 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64163376-81f8-423f-9d43-78fb10db0c61-config-data\") pod \"nova-scheduler-0\" (UID: \"64163376-81f8-423f-9d43-78fb10db0c61\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:17 crc kubenswrapper[4914]: I0127 14:08:17.078548 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5kfc\" (UniqueName: \"kubernetes.io/projected/64163376-81f8-423f-9d43-78fb10db0c61-kube-api-access-q5kfc\") pod \"nova-scheduler-0\" (UID: \"64163376-81f8-423f-9d43-78fb10db0c61\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:17 crc kubenswrapper[4914]: I0127 14:08:17.181286 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:08:17 crc kubenswrapper[4914]: I0127 14:08:17.618250 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:08:17 crc kubenswrapper[4914]: I0127 14:08:17.820824 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64163376-81f8-423f-9d43-78fb10db0c61","Type":"ContainerStarted","Data":"ba3158515f565709a07131333748df124c9a1c0ddea0036d8abe59379a749f02"} Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.196232 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1f18bf8b-0605-4134-9164-6b260b45d655" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": read tcp 10.217.0.2:35432->10.217.0.208:8775: read: connection reset by peer" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.196244 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1f18bf8b-0605-4134-9164-6b260b45d655" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": read tcp 10.217.0.2:35442->10.217.0.208:8775: read: connection reset by peer" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.311341 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04aef3ba-9fc4-4b4e-821c-66c59df81d31" path="/var/lib/kubelet/pods/04aef3ba-9fc4-4b4e-821c-66c59df81d31/volumes" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.706501 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.807886 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-config-data\") pod \"1f18bf8b-0605-4134-9164-6b260b45d655\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.807941 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f18bf8b-0605-4134-9164-6b260b45d655-logs\") pod \"1f18bf8b-0605-4134-9164-6b260b45d655\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.808074 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-nova-metadata-tls-certs\") pod \"1f18bf8b-0605-4134-9164-6b260b45d655\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.808135 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-combined-ca-bundle\") pod \"1f18bf8b-0605-4134-9164-6b260b45d655\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.808201 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd7m4\" (UniqueName: \"kubernetes.io/projected/1f18bf8b-0605-4134-9164-6b260b45d655-kube-api-access-rd7m4\") pod \"1f18bf8b-0605-4134-9164-6b260b45d655\" (UID: \"1f18bf8b-0605-4134-9164-6b260b45d655\") " Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.808605 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1f18bf8b-0605-4134-9164-6b260b45d655-logs" (OuterVolumeSpecName: "logs") pod "1f18bf8b-0605-4134-9164-6b260b45d655" (UID: "1f18bf8b-0605-4134-9164-6b260b45d655"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.818104 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f18bf8b-0605-4134-9164-6b260b45d655-kube-api-access-rd7m4" (OuterVolumeSpecName: "kube-api-access-rd7m4") pod "1f18bf8b-0605-4134-9164-6b260b45d655" (UID: "1f18bf8b-0605-4134-9164-6b260b45d655"). InnerVolumeSpecName "kube-api-access-rd7m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.832761 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-config-data" (OuterVolumeSpecName: "config-data") pod "1f18bf8b-0605-4134-9164-6b260b45d655" (UID: "1f18bf8b-0605-4134-9164-6b260b45d655"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.835796 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64163376-81f8-423f-9d43-78fb10db0c61","Type":"ContainerStarted","Data":"9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44"} Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.838188 4914 generic.go:334] "Generic (PLEG): container finished" podID="1f18bf8b-0605-4134-9164-6b260b45d655" containerID="d2c90702edddcfa7e140951a3eb81347b66cf038b59705282e65abafb3f7f7de" exitCode=0 Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.838226 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f18bf8b-0605-4134-9164-6b260b45d655","Type":"ContainerDied","Data":"d2c90702edddcfa7e140951a3eb81347b66cf038b59705282e65abafb3f7f7de"} Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.838245 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f18bf8b-0605-4134-9164-6b260b45d655","Type":"ContainerDied","Data":"e902e3ecd6a842f33b6591c675545515e060f269fe93944f8a917e482702bedf"} Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.838262 4914 scope.go:117] "RemoveContainer" containerID="d2c90702edddcfa7e140951a3eb81347b66cf038b59705282e65abafb3f7f7de" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.838361 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.844804 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f18bf8b-0605-4134-9164-6b260b45d655" (UID: "1f18bf8b-0605-4134-9164-6b260b45d655"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.857819 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.857796268 podStartE2EDuration="2.857796268s" podCreationTimestamp="2026-01-27 14:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:18.852448291 +0000 UTC m=+1457.164798396" watchObservedRunningTime="2026-01-27 14:08:18.857796268 +0000 UTC m=+1457.170146353" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.902972 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1f18bf8b-0605-4134-9164-6b260b45d655" (UID: "1f18bf8b-0605-4134-9164-6b260b45d655"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.911073 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.911130 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd7m4\" (UniqueName: \"kubernetes.io/projected/1f18bf8b-0605-4134-9164-6b260b45d655-kube-api-access-rd7m4\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.911146 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.911192 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f18bf8b-0605-4134-9164-6b260b45d655-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.911202 4914 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f18bf8b-0605-4134-9164-6b260b45d655-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.965598 4914 scope.go:117] "RemoveContainer" containerID="1a1ae5ea7ef1d93ee16288df8fa20c5c933aead4a3285658c76915e7246df314" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.994142 4914 scope.go:117] "RemoveContainer" containerID="d2c90702edddcfa7e140951a3eb81347b66cf038b59705282e65abafb3f7f7de" Jan 27 14:08:18 crc kubenswrapper[4914]: E0127 14:08:18.994842 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d2c90702edddcfa7e140951a3eb81347b66cf038b59705282e65abafb3f7f7de\": container with ID starting with d2c90702edddcfa7e140951a3eb81347b66cf038b59705282e65abafb3f7f7de not found: ID does not exist" containerID="d2c90702edddcfa7e140951a3eb81347b66cf038b59705282e65abafb3f7f7de" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.994884 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2c90702edddcfa7e140951a3eb81347b66cf038b59705282e65abafb3f7f7de"} err="failed to get container status \"d2c90702edddcfa7e140951a3eb81347b66cf038b59705282e65abafb3f7f7de\": rpc error: code = NotFound desc = could not find container \"d2c90702edddcfa7e140951a3eb81347b66cf038b59705282e65abafb3f7f7de\": container with ID starting with d2c90702edddcfa7e140951a3eb81347b66cf038b59705282e65abafb3f7f7de not found: ID does not exist" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.994909 4914 scope.go:117] "RemoveContainer" containerID="1a1ae5ea7ef1d93ee16288df8fa20c5c933aead4a3285658c76915e7246df314" Jan 27 14:08:18 crc kubenswrapper[4914]: E0127 14:08:18.995313 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1ae5ea7ef1d93ee16288df8fa20c5c933aead4a3285658c76915e7246df314\": container with ID starting with 1a1ae5ea7ef1d93ee16288df8fa20c5c933aead4a3285658c76915e7246df314 not found: ID does not exist" containerID="1a1ae5ea7ef1d93ee16288df8fa20c5c933aead4a3285658c76915e7246df314" Jan 27 14:08:18 crc kubenswrapper[4914]: I0127 14:08:18.995356 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1ae5ea7ef1d93ee16288df8fa20c5c933aead4a3285658c76915e7246df314"} err="failed to get container status \"1a1ae5ea7ef1d93ee16288df8fa20c5c933aead4a3285658c76915e7246df314\": rpc error: code = NotFound desc = could not find container \"1a1ae5ea7ef1d93ee16288df8fa20c5c933aead4a3285658c76915e7246df314\": container with ID 
starting with 1a1ae5ea7ef1d93ee16288df8fa20c5c933aead4a3285658c76915e7246df314 not found: ID does not exist" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.178708 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.188582 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.198069 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:08:19 crc kubenswrapper[4914]: E0127 14:08:19.198464 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f18bf8b-0605-4134-9164-6b260b45d655" containerName="nova-metadata-log" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.198482 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f18bf8b-0605-4134-9164-6b260b45d655" containerName="nova-metadata-log" Jan 27 14:08:19 crc kubenswrapper[4914]: E0127 14:08:19.198523 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f18bf8b-0605-4134-9164-6b260b45d655" containerName="nova-metadata-metadata" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.198531 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f18bf8b-0605-4134-9164-6b260b45d655" containerName="nova-metadata-metadata" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.198717 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f18bf8b-0605-4134-9164-6b260b45d655" containerName="nova-metadata-metadata" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.198737 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f18bf8b-0605-4134-9164-6b260b45d655" containerName="nova-metadata-log" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.199660 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.203247 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.203501 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.219939 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.318388 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2rd9\" (UniqueName: \"kubernetes.io/projected/bd68e412-1f67-4c60-b79d-637c57123f0b-kube-api-access-j2rd9\") pod \"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.318450 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd68e412-1f67-4c60-b79d-637c57123f0b-logs\") pod \"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.318505 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.318552 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.318578 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-config-data\") pod \"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.420427 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd68e412-1f67-4c60-b79d-637c57123f0b-logs\") pod \"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.420496 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.420594 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.420630 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-config-data\") pod \"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.420894 4914 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2rd9\" (UniqueName: \"kubernetes.io/projected/bd68e412-1f67-4c60-b79d-637c57123f0b-kube-api-access-j2rd9\") pod \"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.421678 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd68e412-1f67-4c60-b79d-637c57123f0b-logs\") pod \"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.425701 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.425753 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.426618 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-config-data\") pod \"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.440632 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2rd9\" (UniqueName: \"kubernetes.io/projected/bd68e412-1f67-4c60-b79d-637c57123f0b-kube-api-access-j2rd9\") pod 
\"nova-metadata-0\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.521502 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:08:19 crc kubenswrapper[4914]: I0127 14:08:19.973898 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.306626 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f18bf8b-0605-4134-9164-6b260b45d655" path="/var/lib/kubelet/pods/1f18bf8b-0605-4134-9164-6b260b45d655/volumes" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.528402 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.667523 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wn88\" (UniqueName: \"kubernetes.io/projected/f9d52bb9-2efc-42fa-a2c9-75f671775895-kube-api-access-7wn88\") pod \"f9d52bb9-2efc-42fa-a2c9-75f671775895\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.667932 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9d52bb9-2efc-42fa-a2c9-75f671775895-logs\") pod \"f9d52bb9-2efc-42fa-a2c9-75f671775895\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.667987 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-config-data\") pod \"f9d52bb9-2efc-42fa-a2c9-75f671775895\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.668093 4914 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-combined-ca-bundle\") pod \"f9d52bb9-2efc-42fa-a2c9-75f671775895\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.668170 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-public-tls-certs\") pod \"f9d52bb9-2efc-42fa-a2c9-75f671775895\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.668231 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-internal-tls-certs\") pod \"f9d52bb9-2efc-42fa-a2c9-75f671775895\" (UID: \"f9d52bb9-2efc-42fa-a2c9-75f671775895\") " Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.668619 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9d52bb9-2efc-42fa-a2c9-75f671775895-logs" (OuterVolumeSpecName: "logs") pod "f9d52bb9-2efc-42fa-a2c9-75f671775895" (UID: "f9d52bb9-2efc-42fa-a2c9-75f671775895"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.677938 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d52bb9-2efc-42fa-a2c9-75f671775895-kube-api-access-7wn88" (OuterVolumeSpecName: "kube-api-access-7wn88") pod "f9d52bb9-2efc-42fa-a2c9-75f671775895" (UID: "f9d52bb9-2efc-42fa-a2c9-75f671775895"). InnerVolumeSpecName "kube-api-access-7wn88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.696918 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-config-data" (OuterVolumeSpecName: "config-data") pod "f9d52bb9-2efc-42fa-a2c9-75f671775895" (UID: "f9d52bb9-2efc-42fa-a2c9-75f671775895"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.716534 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f9d52bb9-2efc-42fa-a2c9-75f671775895" (UID: "f9d52bb9-2efc-42fa-a2c9-75f671775895"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.718587 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9d52bb9-2efc-42fa-a2c9-75f671775895" (UID: "f9d52bb9-2efc-42fa-a2c9-75f671775895"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.724728 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f9d52bb9-2efc-42fa-a2c9-75f671775895" (UID: "f9d52bb9-2efc-42fa-a2c9-75f671775895"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.770448 4914 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.770485 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wn88\" (UniqueName: \"kubernetes.io/projected/f9d52bb9-2efc-42fa-a2c9-75f671775895-kube-api-access-7wn88\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.770502 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9d52bb9-2efc-42fa-a2c9-75f671775895-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.770514 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.770524 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.770533 4914 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d52bb9-2efc-42fa-a2c9-75f671775895-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.860293 4914 generic.go:334] "Generic (PLEG): container finished" podID="f9d52bb9-2efc-42fa-a2c9-75f671775895" containerID="46eb8a6fc2df24df5351a89dec08958fae760146196bc5a631aaf078ad65075e" exitCode=0 Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.860383 4914 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.860769 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9d52bb9-2efc-42fa-a2c9-75f671775895","Type":"ContainerDied","Data":"46eb8a6fc2df24df5351a89dec08958fae760146196bc5a631aaf078ad65075e"} Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.860873 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9d52bb9-2efc-42fa-a2c9-75f671775895","Type":"ContainerDied","Data":"57821b5ff4dca906f5f4486739d46d82053fb5e2dbd32a5c67636a60c7eac3c7"} Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.860899 4914 scope.go:117] "RemoveContainer" containerID="46eb8a6fc2df24df5351a89dec08958fae760146196bc5a631aaf078ad65075e" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.865029 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bd68e412-1f67-4c60-b79d-637c57123f0b","Type":"ContainerStarted","Data":"32604d895b189f3ec330c7bea7a4260135a444c8cd0c64c3e811058bbe76ff75"} Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.865071 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bd68e412-1f67-4c60-b79d-637c57123f0b","Type":"ContainerStarted","Data":"9162de6a685299d2514f45a968e2c1694e05f953e2309c98ec13a20cd544547e"} Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.865080 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bd68e412-1f67-4c60-b79d-637c57123f0b","Type":"ContainerStarted","Data":"4c7c80ea3af927b74a4cb4b8bbd71ea2040b4134afe4b159ebbb5864b87a5ce9"} Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.897120 4914 scope.go:117] "RemoveContainer" containerID="08a9bb1683b779dd9b09f58894a4efea149eb5f79f06fb911bc5a730d13fa4bd" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 
14:08:20.898591 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.8985711730000001 podStartE2EDuration="1.898571173s" podCreationTimestamp="2026-01-27 14:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:20.883454447 +0000 UTC m=+1459.195804542" watchObservedRunningTime="2026-01-27 14:08:20.898571173 +0000 UTC m=+1459.210921258" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.923775 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.924445 4914 scope.go:117] "RemoveContainer" containerID="46eb8a6fc2df24df5351a89dec08958fae760146196bc5a631aaf078ad65075e" Jan 27 14:08:20 crc kubenswrapper[4914]: E0127 14:08:20.924968 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46eb8a6fc2df24df5351a89dec08958fae760146196bc5a631aaf078ad65075e\": container with ID starting with 46eb8a6fc2df24df5351a89dec08958fae760146196bc5a631aaf078ad65075e not found: ID does not exist" containerID="46eb8a6fc2df24df5351a89dec08958fae760146196bc5a631aaf078ad65075e" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.925074 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46eb8a6fc2df24df5351a89dec08958fae760146196bc5a631aaf078ad65075e"} err="failed to get container status \"46eb8a6fc2df24df5351a89dec08958fae760146196bc5a631aaf078ad65075e\": rpc error: code = NotFound desc = could not find container \"46eb8a6fc2df24df5351a89dec08958fae760146196bc5a631aaf078ad65075e\": container with ID starting with 46eb8a6fc2df24df5351a89dec08958fae760146196bc5a631aaf078ad65075e not found: ID does not exist" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.925157 4914 scope.go:117] "RemoveContainer" 
containerID="08a9bb1683b779dd9b09f58894a4efea149eb5f79f06fb911bc5a730d13fa4bd" Jan 27 14:08:20 crc kubenswrapper[4914]: E0127 14:08:20.925584 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08a9bb1683b779dd9b09f58894a4efea149eb5f79f06fb911bc5a730d13fa4bd\": container with ID starting with 08a9bb1683b779dd9b09f58894a4efea149eb5f79f06fb911bc5a730d13fa4bd not found: ID does not exist" containerID="08a9bb1683b779dd9b09f58894a4efea149eb5f79f06fb911bc5a730d13fa4bd" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.925635 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08a9bb1683b779dd9b09f58894a4efea149eb5f79f06fb911bc5a730d13fa4bd"} err="failed to get container status \"08a9bb1683b779dd9b09f58894a4efea149eb5f79f06fb911bc5a730d13fa4bd\": rpc error: code = NotFound desc = could not find container \"08a9bb1683b779dd9b09f58894a4efea149eb5f79f06fb911bc5a730d13fa4bd\": container with ID starting with 08a9bb1683b779dd9b09f58894a4efea149eb5f79f06fb911bc5a730d13fa4bd not found: ID does not exist" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.933599 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.942251 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 14:08:20 crc kubenswrapper[4914]: E0127 14:08:20.942986 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d52bb9-2efc-42fa-a2c9-75f671775895" containerName="nova-api-api" Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.943054 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d52bb9-2efc-42fa-a2c9-75f671775895" containerName="nova-api-api" Jan 27 14:08:20 crc kubenswrapper[4914]: E0127 14:08:20.943117 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d52bb9-2efc-42fa-a2c9-75f671775895" 
containerName="nova-api-log"
Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.943207 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d52bb9-2efc-42fa-a2c9-75f671775895" containerName="nova-api-log"
Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.943420 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d52bb9-2efc-42fa-a2c9-75f671775895" containerName="nova-api-log"
Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.943501 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d52bb9-2efc-42fa-a2c9-75f671775895" containerName="nova-api-api"
Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.944624 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.948160 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.948415 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.949416 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 27 14:08:20 crc kubenswrapper[4914]: I0127 14:08:20.952309 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.076060 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-config-data\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.076139 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.076256 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07732def-2fc5-476b-8501-ade6b4496dfb-logs\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.076338 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nrp5\" (UniqueName: \"kubernetes.io/projected/07732def-2fc5-476b-8501-ade6b4496dfb-kube-api-access-4nrp5\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.076361 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.076379 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-public-tls-certs\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.177919 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-public-tls-certs\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.178003 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-config-data\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.178036 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.178154 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07732def-2fc5-476b-8501-ade6b4496dfb-logs\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.178227 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nrp5\" (UniqueName: \"kubernetes.io/projected/07732def-2fc5-476b-8501-ade6b4496dfb-kube-api-access-4nrp5\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.178247 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.179088 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07732def-2fc5-476b-8501-ade6b4496dfb-logs\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.181127 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.181450 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.182799 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-public-tls-certs\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.183397 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-config-data\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.194568 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nrp5\" (UniqueName: \"kubernetes.io/projected/07732def-2fc5-476b-8501-ade6b4496dfb-kube-api-access-4nrp5\") pod \"nova-api-0\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.290339 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.733105 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:08:21 crc kubenswrapper[4914]: W0127 14:08:21.738746 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07732def_2fc5_476b_8501_ade6b4496dfb.slice/crio-7da408e810170addf49ef4fda92170852c569286c85f840bba9c5fac7337de00 WatchSource:0}: Error finding container 7da408e810170addf49ef4fda92170852c569286c85f840bba9c5fac7337de00: Status 404 returned error can't find the container with id 7da408e810170addf49ef4fda92170852c569286c85f840bba9c5fac7337de00
Jan 27 14:08:21 crc kubenswrapper[4914]: I0127 14:08:21.876776 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07732def-2fc5-476b-8501-ade6b4496dfb","Type":"ContainerStarted","Data":"7da408e810170addf49ef4fda92170852c569286c85f840bba9c5fac7337de00"}
Jan 27 14:08:22 crc kubenswrapper[4914]: I0127 14:08:22.182092 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 27 14:08:22 crc kubenswrapper[4914]: I0127 14:08:22.313688 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9d52bb9-2efc-42fa-a2c9-75f671775895" path="/var/lib/kubelet/pods/f9d52bb9-2efc-42fa-a2c9-75f671775895/volumes"
Jan 27 14:08:22 crc kubenswrapper[4914]: I0127 14:08:22.890114 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07732def-2fc5-476b-8501-ade6b4496dfb","Type":"ContainerStarted","Data":"11e528309e1482298817ab2072d0263c5eb11b4a0ae34d7cf16644c47be88319"}
Jan 27 14:08:22 crc kubenswrapper[4914]: I0127 14:08:22.890522 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07732def-2fc5-476b-8501-ade6b4496dfb","Type":"ContainerStarted","Data":"37f28d19f8ccb5e5eb5b10b0747bc02f33f454e69e1d7bc736debcdfc9b5838c"}
Jan 27 14:08:22 crc kubenswrapper[4914]: I0127 14:08:22.924692 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.9246722529999998 podStartE2EDuration="2.924672253s" podCreationTimestamp="2026-01-27 14:08:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:22.914157914 +0000 UTC m=+1461.226508019" watchObservedRunningTime="2026-01-27 14:08:22.924672253 +0000 UTC m=+1461.237022338"
Jan 27 14:08:24 crc kubenswrapper[4914]: I0127 14:08:24.522876 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 14:08:24 crc kubenswrapper[4914]: I0127 14:08:24.522926 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 14:08:27 crc kubenswrapper[4914]: I0127 14:08:27.181568 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 27 14:08:27 crc kubenswrapper[4914]: I0127 14:08:27.211587 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 27 14:08:27 crc kubenswrapper[4914]: I0127 14:08:27.959588 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 27 14:08:29 crc kubenswrapper[4914]: I0127 14:08:29.523144 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 27 14:08:29 crc kubenswrapper[4914]: I0127 14:08:29.523214 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 27 14:08:30 crc kubenswrapper[4914]: I0127 14:08:30.540103 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:08:30 crc kubenswrapper[4914]: I0127 14:08:30.540126 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:08:31 crc kubenswrapper[4914]: I0127 14:08:31.334707 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 14:08:31 crc kubenswrapper[4914]: I0127 14:08:31.335608 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 14:08:32 crc kubenswrapper[4914]: I0127 14:08:32.371142 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="07732def-2fc5-476b-8501-ade6b4496dfb" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:08:32 crc kubenswrapper[4914]: I0127 14:08:32.371178 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="07732def-2fc5-476b-8501-ade6b4496dfb" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:08:34 crc kubenswrapper[4914]: I0127 14:08:34.499403 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 27 14:08:39 crc kubenswrapper[4914]: I0127 14:08:39.527888 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 27 14:08:39 crc kubenswrapper[4914]: I0127 14:08:39.528421 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 27 14:08:39 crc kubenswrapper[4914]: I0127 14:08:39.533505 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 27 14:08:39 crc kubenswrapper[4914]: I0127 14:08:39.536422 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 27 14:08:41 crc kubenswrapper[4914]: I0127 14:08:41.296812 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 27 14:08:41 crc kubenswrapper[4914]: I0127 14:08:41.297993 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 27 14:08:41 crc kubenswrapper[4914]: I0127 14:08:41.302074 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 27 14:08:41 crc kubenswrapper[4914]: I0127 14:08:41.302890 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 27 14:08:42 crc kubenswrapper[4914]: I0127 14:08:42.075383 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 27 14:08:42 crc kubenswrapper[4914]: I0127 14:08:42.082802 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 27 14:08:43 crc kubenswrapper[4914]: I0127 14:08:43.901326 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 14:08:43 crc kubenswrapper[4914]: I0127 14:08:43.901788 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="e452ff2e-dbbd-484d-80b0-45883aa5fca3" containerName="nova-cell0-conductor-conductor" containerID="cri-o://e20bc525535dca84894bea9c8166d5febea6348255cb0f9d2fb922214417f693" gracePeriod=30
Jan 27 14:08:43 crc kubenswrapper[4914]: I0127 14:08:43.974646 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 14:08:43 crc kubenswrapper[4914]: I0127 14:08:43.975291 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerName="nova-metadata-log" containerID="cri-o://9162de6a685299d2514f45a968e2c1694e05f953e2309c98ec13a20cd544547e" gracePeriod=30
Jan 27 14:08:43 crc kubenswrapper[4914]: I0127 14:08:43.975365 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerName="nova-metadata-metadata" containerID="cri-o://32604d895b189f3ec330c7bea7a4260135a444c8cd0c64c3e811058bbe76ff75" gracePeriod=30
Jan 27 14:08:43 crc kubenswrapper[4914]: I0127 14:08:43.990997 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 14:08:43 crc kubenswrapper[4914]: I0127 14:08:43.991250 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b01a3b5a-39b3-450a-acc4-e76987f7f506" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://1f930b3b6b87a9825ee5e47f0bfd988c0aa4340fdb03506c942c0ae815089987" gracePeriod=30
Jan 27 14:08:44 crc kubenswrapper[4914]: I0127 14:08:44.003116 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 14:08:44 crc kubenswrapper[4914]: I0127 14:08:44.003370 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="64163376-81f8-423f-9d43-78fb10db0c61" containerName="nova-scheduler-scheduler" containerID="cri-o://9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44" gracePeriod=30
Jan 27 14:08:44 crc kubenswrapper[4914]: I0127 14:08:44.064212 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:08:44 crc kubenswrapper[4914]: I0127 14:08:44.105498 4914 generic.go:334] "Generic (PLEG): container finished" podID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerID="9162de6a685299d2514f45a968e2c1694e05f953e2309c98ec13a20cd544547e" exitCode=143
Jan 27 14:08:44 crc kubenswrapper[4914]: I0127 14:08:44.106282 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bd68e412-1f67-4c60-b79d-637c57123f0b","Type":"ContainerDied","Data":"9162de6a685299d2514f45a968e2c1694e05f953e2309c98ec13a20cd544547e"}
Jan 27 14:08:44 crc kubenswrapper[4914]: I0127 14:08:44.908536 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.042075 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-combined-ca-bundle\") pod \"b01a3b5a-39b3-450a-acc4-e76987f7f506\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") "
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.042136 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ljxc\" (UniqueName: \"kubernetes.io/projected/b01a3b5a-39b3-450a-acc4-e76987f7f506-kube-api-access-9ljxc\") pod \"b01a3b5a-39b3-450a-acc4-e76987f7f506\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") "
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.042355 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-nova-novncproxy-tls-certs\") pod \"b01a3b5a-39b3-450a-acc4-e76987f7f506\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") "
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.042397 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-config-data\") pod \"b01a3b5a-39b3-450a-acc4-e76987f7f506\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") "
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.042500 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-vencrypt-tls-certs\") pod \"b01a3b5a-39b3-450a-acc4-e76987f7f506\" (UID: \"b01a3b5a-39b3-450a-acc4-e76987f7f506\") "
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.061102 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b01a3b5a-39b3-450a-acc4-e76987f7f506-kube-api-access-9ljxc" (OuterVolumeSpecName: "kube-api-access-9ljxc") pod "b01a3b5a-39b3-450a-acc4-e76987f7f506" (UID: "b01a3b5a-39b3-450a-acc4-e76987f7f506"). InnerVolumeSpecName "kube-api-access-9ljxc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.097070 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b01a3b5a-39b3-450a-acc4-e76987f7f506" (UID: "b01a3b5a-39b3-450a-acc4-e76987f7f506"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.098975 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-config-data" (OuterVolumeSpecName: "config-data") pod "b01a3b5a-39b3-450a-acc4-e76987f7f506" (UID: "b01a3b5a-39b3-450a-acc4-e76987f7f506"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.115736 4914 generic.go:334] "Generic (PLEG): container finished" podID="b01a3b5a-39b3-450a-acc4-e76987f7f506" containerID="1f930b3b6b87a9825ee5e47f0bfd988c0aa4340fdb03506c942c0ae815089987" exitCode=0
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.116110 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.116201 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b01a3b5a-39b3-450a-acc4-e76987f7f506","Type":"ContainerDied","Data":"1f930b3b6b87a9825ee5e47f0bfd988c0aa4340fdb03506c942c0ae815089987"}
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.116233 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b01a3b5a-39b3-450a-acc4-e76987f7f506","Type":"ContainerDied","Data":"e6dd11b7cd86357c85dfaf97ec593dbee71ac433b224e1ffc5e289ed57cef6be"}
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.116254 4914 scope.go:117] "RemoveContainer" containerID="1f930b3b6b87a9825ee5e47f0bfd988c0aa4340fdb03506c942c0ae815089987"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.116811 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="07732def-2fc5-476b-8501-ade6b4496dfb" containerName="nova-api-log" containerID="cri-o://37f28d19f8ccb5e5eb5b10b0747bc02f33f454e69e1d7bc736debcdfc9b5838c" gracePeriod=30
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.116939 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="07732def-2fc5-476b-8501-ade6b4496dfb" containerName="nova-api-api" containerID="cri-o://11e528309e1482298817ab2072d0263c5eb11b4a0ae34d7cf16644c47be88319" gracePeriod=30
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.122527 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "b01a3b5a-39b3-450a-acc4-e76987f7f506" (UID: "b01a3b5a-39b3-450a-acc4-e76987f7f506"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.145292 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ljxc\" (UniqueName: \"kubernetes.io/projected/b01a3b5a-39b3-450a-acc4-e76987f7f506-kube-api-access-9ljxc\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.145337 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.145351 4914 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.145362 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.163126 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "b01a3b5a-39b3-450a-acc4-e76987f7f506" (UID: "b01a3b5a-39b3-450a-acc4-e76987f7f506"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.177052 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.177311 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="d6038b9a-0f9f-4457-abd7-e4c71ef50128" containerName="nova-cell1-conductor-conductor" containerID="cri-o://7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96" gracePeriod=30
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.214343 4914 scope.go:117] "RemoveContainer" containerID="1f930b3b6b87a9825ee5e47f0bfd988c0aa4340fdb03506c942c0ae815089987"
Jan 27 14:08:45 crc kubenswrapper[4914]: E0127 14:08:45.214778 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f930b3b6b87a9825ee5e47f0bfd988c0aa4340fdb03506c942c0ae815089987\": container with ID starting with 1f930b3b6b87a9825ee5e47f0bfd988c0aa4340fdb03506c942c0ae815089987 not found: ID does not exist" containerID="1f930b3b6b87a9825ee5e47f0bfd988c0aa4340fdb03506c942c0ae815089987"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.214817 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f930b3b6b87a9825ee5e47f0bfd988c0aa4340fdb03506c942c0ae815089987"} err="failed to get container status \"1f930b3b6b87a9825ee5e47f0bfd988c0aa4340fdb03506c942c0ae815089987\": rpc error: code = NotFound desc = could not find container \"1f930b3b6b87a9825ee5e47f0bfd988c0aa4340fdb03506c942c0ae815089987\": container with ID starting with 1f930b3b6b87a9825ee5e47f0bfd988c0aa4340fdb03506c942c0ae815089987 not found: ID does not exist"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.246957 4914 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01a3b5a-39b3-450a-acc4-e76987f7f506-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.457891 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.484892 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.500662 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 14:08:45 crc kubenswrapper[4914]: E0127 14:08:45.501259 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01a3b5a-39b3-450a-acc4-e76987f7f506" containerName="nova-cell1-novncproxy-novncproxy"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.501283 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01a3b5a-39b3-450a-acc4-e76987f7f506" containerName="nova-cell1-novncproxy-novncproxy"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.501510 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="b01a3b5a-39b3-450a-acc4-e76987f7f506" containerName="nova-cell1-novncproxy-novncproxy"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.502452 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.509608 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.509926 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.510393 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.515630 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.654684 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjmdm\" (UniqueName: \"kubernetes.io/projected/986f8538-35e7-4c21-9bed-b79999a106f0-kube-api-access-bjmdm\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.654732 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/986f8538-35e7-4c21-9bed-b79999a106f0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.655031 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/986f8538-35e7-4c21-9bed-b79999a106f0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.655248 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986f8538-35e7-4c21-9bed-b79999a106f0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.655476 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986f8538-35e7-4c21-9bed-b79999a106f0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.757206 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986f8538-35e7-4c21-9bed-b79999a106f0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.757327 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986f8538-35e7-4c21-9bed-b79999a106f0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.757411 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjmdm\" (UniqueName: \"kubernetes.io/projected/986f8538-35e7-4c21-9bed-b79999a106f0-kube-api-access-bjmdm\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.757440 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/986f8538-35e7-4c21-9bed-b79999a106f0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.757527 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/986f8538-35e7-4c21-9bed-b79999a106f0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.761502 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/986f8538-35e7-4c21-9bed-b79999a106f0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.761679 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986f8538-35e7-4c21-9bed-b79999a106f0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.762245 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986f8538-35e7-4c21-9bed-b79999a106f0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.763249 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/986f8538-35e7-4c21-9bed-b79999a106f0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.778413 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjmdm\" (UniqueName: \"kubernetes.io/projected/986f8538-35e7-4c21-9bed-b79999a106f0-kube-api-access-bjmdm\") pod \"nova-cell1-novncproxy-0\" (UID: \"986f8538-35e7-4c21-9bed-b79999a106f0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:45 crc kubenswrapper[4914]: I0127 14:08:45.831068 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:46 crc kubenswrapper[4914]: I0127 14:08:46.140964 4914 generic.go:334] "Generic (PLEG): container finished" podID="07732def-2fc5-476b-8501-ade6b4496dfb" containerID="37f28d19f8ccb5e5eb5b10b0747bc02f33f454e69e1d7bc736debcdfc9b5838c" exitCode=143
Jan 27 14:08:46 crc kubenswrapper[4914]: I0127 14:08:46.141311 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07732def-2fc5-476b-8501-ade6b4496dfb","Type":"ContainerDied","Data":"37f28d19f8ccb5e5eb5b10b0747bc02f33f454e69e1d7bc736debcdfc9b5838c"}
Jan 27 14:08:46 crc kubenswrapper[4914]: I0127 14:08:46.307267 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b01a3b5a-39b3-450a-acc4-e76987f7f506" path="/var/lib/kubelet/pods/b01a3b5a-39b3-450a-acc4-e76987f7f506/volumes"
Jan 27 14:08:46 crc kubenswrapper[4914]: I0127 14:08:46.340349 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 14:08:46 crc kubenswrapper[4914]: E0127 14:08:46.522500 4914 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 27 14:08:46 crc kubenswrapper[4914]: E0127 14:08:46.524080 4914 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 27 14:08:46 crc kubenswrapper[4914]: E0127 14:08:46.525596 4914 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 27 14:08:46 crc kubenswrapper[4914]: E0127 14:08:46.525644 4914 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="d6038b9a-0f9f-4457-abd7-e4c71ef50128" containerName="nova-cell1-conductor-conductor"
Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.109933 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": read tcp 10.217.0.2:37204->10.217.0.219:8775: read: connection reset by peer"
Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.109957 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": read tcp 10.217.0.2:37206->10.217.0.219:8775: read: connection reset by peer"
Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.153286 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"986f8538-35e7-4c21-9bed-b79999a106f0","Type":"ContainerStarted","Data":"489d46f722dd39048d9ee13d58cf15f2a39c6746cdb381cde0b2d44fa615f783"}
Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.153332 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"986f8538-35e7-4c21-9bed-b79999a106f0","Type":"ContainerStarted","Data":"dc8ee604dc1a3879f104f1925608c2113fc25badcc75e061ee22ee8f385e5d3f"}
Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.170094 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.170072162 podStartE2EDuration="2.170072162s" podCreationTimestamp="2026-01-27 14:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:47.169058315 +0000 UTC m=+1485.481408410" watchObservedRunningTime="2026-01-27 14:08:47.170072162 +0000 UTC m=+1485.482422247"
Jan 27 14:08:47 crc kubenswrapper[4914]: E0127 14:08:47.194387 4914 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 14:08:47 crc kubenswrapper[4914]: E0127 14:08:47.195882 4914 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1"
containerID="9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 14:08:47 crc kubenswrapper[4914]: E0127 14:08:47.204934 4914 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 14:08:47 crc kubenswrapper[4914]: E0127 14:08:47.205063 4914 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="64163376-81f8-423f-9d43-78fb10db0c61" containerName="nova-scheduler-scheduler" Jan 27 14:08:47 crc kubenswrapper[4914]: E0127 14:08:47.272242 4914 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e20bc525535dca84894bea9c8166d5febea6348255cb0f9d2fb922214417f693" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 14:08:47 crc kubenswrapper[4914]: E0127 14:08:47.273744 4914 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e20bc525535dca84894bea9c8166d5febea6348255cb0f9d2fb922214417f693" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 14:08:47 crc kubenswrapper[4914]: E0127 14:08:47.274809 4914 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="e20bc525535dca84894bea9c8166d5febea6348255cb0f9d2fb922214417f693" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 14:08:47 crc kubenswrapper[4914]: E0127 14:08:47.274859 4914 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="e452ff2e-dbbd-484d-80b0-45883aa5fca3" containerName="nova-cell0-conductor-conductor" Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.568432 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.690728 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-nova-metadata-tls-certs\") pod \"bd68e412-1f67-4c60-b79d-637c57123f0b\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.691253 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-combined-ca-bundle\") pod \"bd68e412-1f67-4c60-b79d-637c57123f0b\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.691370 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2rd9\" (UniqueName: \"kubernetes.io/projected/bd68e412-1f67-4c60-b79d-637c57123f0b-kube-api-access-j2rd9\") pod \"bd68e412-1f67-4c60-b79d-637c57123f0b\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.691448 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bd68e412-1f67-4c60-b79d-637c57123f0b-logs\") pod \"bd68e412-1f67-4c60-b79d-637c57123f0b\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.691616 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-config-data\") pod \"bd68e412-1f67-4c60-b79d-637c57123f0b\" (UID: \"bd68e412-1f67-4c60-b79d-637c57123f0b\") " Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.692106 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd68e412-1f67-4c60-b79d-637c57123f0b-logs" (OuterVolumeSpecName: "logs") pod "bd68e412-1f67-4c60-b79d-637c57123f0b" (UID: "bd68e412-1f67-4c60-b79d-637c57123f0b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.692394 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd68e412-1f67-4c60-b79d-637c57123f0b-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.697007 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd68e412-1f67-4c60-b79d-637c57123f0b-kube-api-access-j2rd9" (OuterVolumeSpecName: "kube-api-access-j2rd9") pod "bd68e412-1f67-4c60-b79d-637c57123f0b" (UID: "bd68e412-1f67-4c60-b79d-637c57123f0b"). InnerVolumeSpecName "kube-api-access-j2rd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.721819 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd68e412-1f67-4c60-b79d-637c57123f0b" (UID: "bd68e412-1f67-4c60-b79d-637c57123f0b"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.724187 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-config-data" (OuterVolumeSpecName: "config-data") pod "bd68e412-1f67-4c60-b79d-637c57123f0b" (UID: "bd68e412-1f67-4c60-b79d-637c57123f0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.767230 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bd68e412-1f67-4c60-b79d-637c57123f0b" (UID: "bd68e412-1f67-4c60-b79d-637c57123f0b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.794287 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.794965 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2rd9\" (UniqueName: \"kubernetes.io/projected/bd68e412-1f67-4c60-b79d-637c57123f0b-kube-api-access-j2rd9\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.794996 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:47 crc kubenswrapper[4914]: I0127 14:08:47.795009 4914 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bd68e412-1f67-4c60-b79d-637c57123f0b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.163809 4914 generic.go:334] "Generic (PLEG): container finished" podID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerID="32604d895b189f3ec330c7bea7a4260135a444c8cd0c64c3e811058bbe76ff75" exitCode=0 Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.163907 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.163898 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bd68e412-1f67-4c60-b79d-637c57123f0b","Type":"ContainerDied","Data":"32604d895b189f3ec330c7bea7a4260135a444c8cd0c64c3e811058bbe76ff75"} Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.163990 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bd68e412-1f67-4c60-b79d-637c57123f0b","Type":"ContainerDied","Data":"4c7c80ea3af927b74a4cb4b8bbd71ea2040b4134afe4b159ebbb5864b87a5ce9"} Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.164019 4914 scope.go:117] "RemoveContainer" containerID="32604d895b189f3ec330c7bea7a4260135a444c8cd0c64c3e811058bbe76ff75" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.233342 4914 scope.go:117] "RemoveContainer" containerID="9162de6a685299d2514f45a968e2c1694e05f953e2309c98ec13a20cd544547e" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.253064 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.267592 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.272105 4914 scope.go:117] "RemoveContainer" containerID="32604d895b189f3ec330c7bea7a4260135a444c8cd0c64c3e811058bbe76ff75" Jan 27 
14:08:48 crc kubenswrapper[4914]: E0127 14:08:48.272750 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32604d895b189f3ec330c7bea7a4260135a444c8cd0c64c3e811058bbe76ff75\": container with ID starting with 32604d895b189f3ec330c7bea7a4260135a444c8cd0c64c3e811058bbe76ff75 not found: ID does not exist" containerID="32604d895b189f3ec330c7bea7a4260135a444c8cd0c64c3e811058bbe76ff75" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.272805 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32604d895b189f3ec330c7bea7a4260135a444c8cd0c64c3e811058bbe76ff75"} err="failed to get container status \"32604d895b189f3ec330c7bea7a4260135a444c8cd0c64c3e811058bbe76ff75\": rpc error: code = NotFound desc = could not find container \"32604d895b189f3ec330c7bea7a4260135a444c8cd0c64c3e811058bbe76ff75\": container with ID starting with 32604d895b189f3ec330c7bea7a4260135a444c8cd0c64c3e811058bbe76ff75 not found: ID does not exist" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.272856 4914 scope.go:117] "RemoveContainer" containerID="9162de6a685299d2514f45a968e2c1694e05f953e2309c98ec13a20cd544547e" Jan 27 14:08:48 crc kubenswrapper[4914]: E0127 14:08:48.273255 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9162de6a685299d2514f45a968e2c1694e05f953e2309c98ec13a20cd544547e\": container with ID starting with 9162de6a685299d2514f45a968e2c1694e05f953e2309c98ec13a20cd544547e not found: ID does not exist" containerID="9162de6a685299d2514f45a968e2c1694e05f953e2309c98ec13a20cd544547e" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.273282 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9162de6a685299d2514f45a968e2c1694e05f953e2309c98ec13a20cd544547e"} err="failed to get container status 
\"9162de6a685299d2514f45a968e2c1694e05f953e2309c98ec13a20cd544547e\": rpc error: code = NotFound desc = could not find container \"9162de6a685299d2514f45a968e2c1694e05f953e2309c98ec13a20cd544547e\": container with ID starting with 9162de6a685299d2514f45a968e2c1694e05f953e2309c98ec13a20cd544547e not found: ID does not exist" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.275684 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:08:48 crc kubenswrapper[4914]: E0127 14:08:48.276121 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerName="nova-metadata-metadata" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.276142 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerName="nova-metadata-metadata" Jan 27 14:08:48 crc kubenswrapper[4914]: E0127 14:08:48.276169 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerName="nova-metadata-log" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.276177 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerName="nova-metadata-log" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.276418 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerName="nova-metadata-log" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.276447 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd68e412-1f67-4c60-b79d-637c57123f0b" containerName="nova-metadata-metadata" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.277611 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.279695 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.280030 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.308648 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd68e412-1f67-4c60-b79d-637c57123f0b" path="/var/lib/kubelet/pods/bd68e412-1f67-4c60-b79d-637c57123f0b/volumes" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.309272 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.437279 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-config-data\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.437337 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-logs\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.437385 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlvk2\" (UniqueName: \"kubernetes.io/projected/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-kube-api-access-mlvk2\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.437552 
4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.437581 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.543018 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-config-data\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.543313 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-logs\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.543402 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlvk2\" (UniqueName: \"kubernetes.io/projected/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-kube-api-access-mlvk2\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.543589 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.543666 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.544114 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-logs\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.547984 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.548589 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-config-data\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.557236 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc 
kubenswrapper[4914]: I0127 14:08:48.569984 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlvk2\" (UniqueName: \"kubernetes.io/projected/45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3-kube-api-access-mlvk2\") pod \"nova-metadata-0\" (UID: \"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3\") " pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.597203 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:08:48 crc kubenswrapper[4914]: I0127 14:08:48.925176 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.055708 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-combined-ca-bundle\") pod \"07732def-2fc5-476b-8501-ade6b4496dfb\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.055794 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nrp5\" (UniqueName: \"kubernetes.io/projected/07732def-2fc5-476b-8501-ade6b4496dfb-kube-api-access-4nrp5\") pod \"07732def-2fc5-476b-8501-ade6b4496dfb\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.056055 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-config-data\") pod \"07732def-2fc5-476b-8501-ade6b4496dfb\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.056168 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-public-tls-certs\") pod \"07732def-2fc5-476b-8501-ade6b4496dfb\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.056209 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07732def-2fc5-476b-8501-ade6b4496dfb-logs\") pod \"07732def-2fc5-476b-8501-ade6b4496dfb\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.056229 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-internal-tls-certs\") pod \"07732def-2fc5-476b-8501-ade6b4496dfb\" (UID: \"07732def-2fc5-476b-8501-ade6b4496dfb\") " Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.056936 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07732def-2fc5-476b-8501-ade6b4496dfb-logs" (OuterVolumeSpecName: "logs") pod "07732def-2fc5-476b-8501-ade6b4496dfb" (UID: "07732def-2fc5-476b-8501-ade6b4496dfb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.061780 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07732def-2fc5-476b-8501-ade6b4496dfb-kube-api-access-4nrp5" (OuterVolumeSpecName: "kube-api-access-4nrp5") pod "07732def-2fc5-476b-8501-ade6b4496dfb" (UID: "07732def-2fc5-476b-8501-ade6b4496dfb"). InnerVolumeSpecName "kube-api-access-4nrp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.091498 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-config-data" (OuterVolumeSpecName: "config-data") pod "07732def-2fc5-476b-8501-ade6b4496dfb" (UID: "07732def-2fc5-476b-8501-ade6b4496dfb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.098512 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07732def-2fc5-476b-8501-ade6b4496dfb" (UID: "07732def-2fc5-476b-8501-ade6b4496dfb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.127410 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.127968 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "07732def-2fc5-476b-8501-ade6b4496dfb" (UID: "07732def-2fc5-476b-8501-ade6b4496dfb"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:49 crc kubenswrapper[4914]: W0127 14:08:49.129855 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45eaeef7_d320_4cb2_9d21_ff2c07fc5fa3.slice/crio-50033fdcb0cb8a239de8411b51136ffdeaf6e3e6c9e99146dfaeeac2764da267 WatchSource:0}: Error finding container 50033fdcb0cb8a239de8411b51136ffdeaf6e3e6c9e99146dfaeeac2764da267: Status 404 returned error can't find the container with id 50033fdcb0cb8a239de8411b51136ffdeaf6e3e6c9e99146dfaeeac2764da267 Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.130942 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "07732def-2fc5-476b-8501-ade6b4496dfb" (UID: "07732def-2fc5-476b-8501-ade6b4496dfb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.158877 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.159154 4914 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.159233 4914 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07732def-2fc5-476b-8501-ade6b4496dfb-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.159313 4914 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.159384 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07732def-2fc5-476b-8501-ade6b4496dfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.159451 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nrp5\" (UniqueName: \"kubernetes.io/projected/07732def-2fc5-476b-8501-ade6b4496dfb-kube-api-access-4nrp5\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.179604 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3","Type":"ContainerStarted","Data":"50033fdcb0cb8a239de8411b51136ffdeaf6e3e6c9e99146dfaeeac2764da267"} Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.182682 4914 generic.go:334] "Generic (PLEG): container finished" podID="07732def-2fc5-476b-8501-ade6b4496dfb" containerID="11e528309e1482298817ab2072d0263c5eb11b4a0ae34d7cf16644c47be88319" exitCode=0 Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.182863 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07732def-2fc5-476b-8501-ade6b4496dfb","Type":"ContainerDied","Data":"11e528309e1482298817ab2072d0263c5eb11b4a0ae34d7cf16644c47be88319"} Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.183097 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07732def-2fc5-476b-8501-ade6b4496dfb","Type":"ContainerDied","Data":"7da408e810170addf49ef4fda92170852c569286c85f840bba9c5fac7337de00"} Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.183183 4914 scope.go:117] "RemoveContainer" containerID="11e528309e1482298817ab2072d0263c5eb11b4a0ae34d7cf16644c47be88319" Jan 
27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.182924 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.348156 4914 scope.go:117] "RemoveContainer" containerID="37f28d19f8ccb5e5eb5b10b0747bc02f33f454e69e1d7bc736debcdfc9b5838c" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.371429 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.381814 4914 scope.go:117] "RemoveContainer" containerID="11e528309e1482298817ab2072d0263c5eb11b4a0ae34d7cf16644c47be88319" Jan 27 14:08:49 crc kubenswrapper[4914]: E0127 14:08:49.384293 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11e528309e1482298817ab2072d0263c5eb11b4a0ae34d7cf16644c47be88319\": container with ID starting with 11e528309e1482298817ab2072d0263c5eb11b4a0ae34d7cf16644c47be88319 not found: ID does not exist" containerID="11e528309e1482298817ab2072d0263c5eb11b4a0ae34d7cf16644c47be88319" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.384333 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11e528309e1482298817ab2072d0263c5eb11b4a0ae34d7cf16644c47be88319"} err="failed to get container status \"11e528309e1482298817ab2072d0263c5eb11b4a0ae34d7cf16644c47be88319\": rpc error: code = NotFound desc = could not find container \"11e528309e1482298817ab2072d0263c5eb11b4a0ae34d7cf16644c47be88319\": container with ID starting with 11e528309e1482298817ab2072d0263c5eb11b4a0ae34d7cf16644c47be88319 not found: ID does not exist" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.384358 4914 scope.go:117] "RemoveContainer" containerID="37f28d19f8ccb5e5eb5b10b0747bc02f33f454e69e1d7bc736debcdfc9b5838c" Jan 27 14:08:49 crc kubenswrapper[4914]: E0127 14:08:49.384659 4914 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37f28d19f8ccb5e5eb5b10b0747bc02f33f454e69e1d7bc736debcdfc9b5838c\": container with ID starting with 37f28d19f8ccb5e5eb5b10b0747bc02f33f454e69e1d7bc736debcdfc9b5838c not found: ID does not exist" containerID="37f28d19f8ccb5e5eb5b10b0747bc02f33f454e69e1d7bc736debcdfc9b5838c" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.384715 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f28d19f8ccb5e5eb5b10b0747bc02f33f454e69e1d7bc736debcdfc9b5838c"} err="failed to get container status \"37f28d19f8ccb5e5eb5b10b0747bc02f33f454e69e1d7bc736debcdfc9b5838c\": rpc error: code = NotFound desc = could not find container \"37f28d19f8ccb5e5eb5b10b0747bc02f33f454e69e1d7bc736debcdfc9b5838c\": container with ID starting with 37f28d19f8ccb5e5eb5b10b0747bc02f33f454e69e1d7bc736debcdfc9b5838c not found: ID does not exist" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.387127 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.404320 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 14:08:49 crc kubenswrapper[4914]: E0127 14:08:49.404814 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07732def-2fc5-476b-8501-ade6b4496dfb" containerName="nova-api-api" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.404856 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="07732def-2fc5-476b-8501-ade6b4496dfb" containerName="nova-api-api" Jan 27 14:08:49 crc kubenswrapper[4914]: E0127 14:08:49.404892 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07732def-2fc5-476b-8501-ade6b4496dfb" containerName="nova-api-log" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.404900 4914 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="07732def-2fc5-476b-8501-ade6b4496dfb" containerName="nova-api-log" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.405080 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="07732def-2fc5-476b-8501-ade6b4496dfb" containerName="nova-api-api" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.405093 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="07732def-2fc5-476b-8501-ade6b4496dfb" containerName="nova-api-log" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.406033 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.413570 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.413598 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.413950 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.414309 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.568901 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a546639f-94d3-43dc-8591-a6444b2a2150-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.569381 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a546639f-94d3-43dc-8591-a6444b2a2150-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.569528 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a546639f-94d3-43dc-8591-a6444b2a2150-logs\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.569890 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzqb7\" (UniqueName: \"kubernetes.io/projected/a546639f-94d3-43dc-8591-a6444b2a2150-kube-api-access-zzqb7\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.570137 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a546639f-94d3-43dc-8591-a6444b2a2150-public-tls-certs\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.570278 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a546639f-94d3-43dc-8591-a6444b2a2150-config-data\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.675372 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a546639f-94d3-43dc-8591-a6444b2a2150-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.675439 4914 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a546639f-94d3-43dc-8591-a6444b2a2150-logs\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.675563 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzqb7\" (UniqueName: \"kubernetes.io/projected/a546639f-94d3-43dc-8591-a6444b2a2150-kube-api-access-zzqb7\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.675628 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a546639f-94d3-43dc-8591-a6444b2a2150-public-tls-certs\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.675674 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a546639f-94d3-43dc-8591-a6444b2a2150-config-data\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.675735 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a546639f-94d3-43dc-8591-a6444b2a2150-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.676429 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a546639f-94d3-43dc-8591-a6444b2a2150-logs\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 
14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.681585 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a546639f-94d3-43dc-8591-a6444b2a2150-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.684043 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a546639f-94d3-43dc-8591-a6444b2a2150-public-tls-certs\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.684692 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a546639f-94d3-43dc-8591-a6444b2a2150-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.684715 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a546639f-94d3-43dc-8591-a6444b2a2150-config-data\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.686468 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.698320 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzqb7\" (UniqueName: \"kubernetes.io/projected/a546639f-94d3-43dc-8591-a6444b2a2150-kube-api-access-zzqb7\") pod \"nova-api-0\" (UID: \"a546639f-94d3-43dc-8591-a6444b2a2150\") " pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.801506 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.878413 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5kfc\" (UniqueName: \"kubernetes.io/projected/64163376-81f8-423f-9d43-78fb10db0c61-kube-api-access-q5kfc\") pod \"64163376-81f8-423f-9d43-78fb10db0c61\" (UID: \"64163376-81f8-423f-9d43-78fb10db0c61\") " Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.878567 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64163376-81f8-423f-9d43-78fb10db0c61-config-data\") pod \"64163376-81f8-423f-9d43-78fb10db0c61\" (UID: \"64163376-81f8-423f-9d43-78fb10db0c61\") " Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.878791 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64163376-81f8-423f-9d43-78fb10db0c61-combined-ca-bundle\") pod \"64163376-81f8-423f-9d43-78fb10db0c61\" (UID: \"64163376-81f8-423f-9d43-78fb10db0c61\") " Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.883952 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64163376-81f8-423f-9d43-78fb10db0c61-kube-api-access-q5kfc" (OuterVolumeSpecName: "kube-api-access-q5kfc") pod "64163376-81f8-423f-9d43-78fb10db0c61" (UID: 
"64163376-81f8-423f-9d43-78fb10db0c61"). InnerVolumeSpecName "kube-api-access-q5kfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.906999 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64163376-81f8-423f-9d43-78fb10db0c61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64163376-81f8-423f-9d43-78fb10db0c61" (UID: "64163376-81f8-423f-9d43-78fb10db0c61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.907687 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64163376-81f8-423f-9d43-78fb10db0c61-config-data" (OuterVolumeSpecName: "config-data") pod "64163376-81f8-423f-9d43-78fb10db0c61" (UID: "64163376-81f8-423f-9d43-78fb10db0c61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.981746 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64163376-81f8-423f-9d43-78fb10db0c61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.981796 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5kfc\" (UniqueName: \"kubernetes.io/projected/64163376-81f8-423f-9d43-78fb10db0c61-kube-api-access-q5kfc\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:49 crc kubenswrapper[4914]: I0127 14:08:49.981815 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64163376-81f8-423f-9d43-78fb10db0c61-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.198669 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3","Type":"ContainerStarted","Data":"d42dcd6046dc9f284871fd21925f097962511197785a84c39b2cb59167848d03"} Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.199303 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3","Type":"ContainerStarted","Data":"28782a3ebd232930f58436919136ca8ae62b199592fbe802fca267c293141ed4"} Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.200499 4914 generic.go:334] "Generic (PLEG): container finished" podID="64163376-81f8-423f-9d43-78fb10db0c61" containerID="9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44" exitCode=0 Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.200550 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.200581 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64163376-81f8-423f-9d43-78fb10db0c61","Type":"ContainerDied","Data":"9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44"} Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.200646 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64163376-81f8-423f-9d43-78fb10db0c61","Type":"ContainerDied","Data":"ba3158515f565709a07131333748df124c9a1c0ddea0036d8abe59379a749f02"} Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.200670 4914 scope.go:117] "RemoveContainer" containerID="9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.217722 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.217705217 podStartE2EDuration="2.217705217s" podCreationTimestamp="2026-01-27 14:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:50.214168949 +0000 UTC m=+1488.526519044" watchObservedRunningTime="2026-01-27 14:08:50.217705217 +0000 UTC m=+1488.530055302" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.227037 4914 scope.go:117] "RemoveContainer" containerID="9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44" Jan 27 14:08:50 crc kubenswrapper[4914]: E0127 14:08:50.227426 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44\": container with ID starting with 9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44 not found: ID does not exist" containerID="9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.227486 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44"} err="failed to get container status \"9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44\": rpc error: code = NotFound desc = could not find container \"9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44\": container with ID starting with 9f01477867b9f7253cce742efe299f3ba03fa9e906dde0caf1d4c8aaee046e44 not found: ID does not exist" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.251899 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.258367 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.280892 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:08:50 crc kubenswrapper[4914]: E0127 14:08:50.281363 4914 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64163376-81f8-423f-9d43-78fb10db0c61" containerName="nova-scheduler-scheduler" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.281382 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="64163376-81f8-423f-9d43-78fb10db0c61" containerName="nova-scheduler-scheduler" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.281559 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="64163376-81f8-423f-9d43-78fb10db0c61" containerName="nova-scheduler-scheduler" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.282227 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.286902 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.340005 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07732def-2fc5-476b-8501-ade6b4496dfb" path="/var/lib/kubelet/pods/07732def-2fc5-476b-8501-ade6b4496dfb/volumes" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.341202 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64163376-81f8-423f-9d43-78fb10db0c61" path="/var/lib/kubelet/pods/64163376-81f8-423f-9d43-78fb10db0c61/volumes" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.341985 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.342030 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.389480 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d3c192-12c7-44a2-8100-5307d6a9bb9d-config-data\") pod \"nova-scheduler-0\" (UID: 
\"b5d3c192-12c7-44a2-8100-5307d6a9bb9d\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.389697 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjvld\" (UniqueName: \"kubernetes.io/projected/b5d3c192-12c7-44a2-8100-5307d6a9bb9d-kube-api-access-xjvld\") pod \"nova-scheduler-0\" (UID: \"b5d3c192-12c7-44a2-8100-5307d6a9bb9d\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.389724 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d3c192-12c7-44a2-8100-5307d6a9bb9d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b5d3c192-12c7-44a2-8100-5307d6a9bb9d\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.491398 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjvld\" (UniqueName: \"kubernetes.io/projected/b5d3c192-12c7-44a2-8100-5307d6a9bb9d-kube-api-access-xjvld\") pod \"nova-scheduler-0\" (UID: \"b5d3c192-12c7-44a2-8100-5307d6a9bb9d\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.491461 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d3c192-12c7-44a2-8100-5307d6a9bb9d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b5d3c192-12c7-44a2-8100-5307d6a9bb9d\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.491531 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d3c192-12c7-44a2-8100-5307d6a9bb9d-config-data\") pod \"nova-scheduler-0\" (UID: \"b5d3c192-12c7-44a2-8100-5307d6a9bb9d\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:50 crc 
kubenswrapper[4914]: I0127 14:08:50.497690 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d3c192-12c7-44a2-8100-5307d6a9bb9d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b5d3c192-12c7-44a2-8100-5307d6a9bb9d\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.501400 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d3c192-12c7-44a2-8100-5307d6a9bb9d-config-data\") pod \"nova-scheduler-0\" (UID: \"b5d3c192-12c7-44a2-8100-5307d6a9bb9d\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.507382 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjvld\" (UniqueName: \"kubernetes.io/projected/b5d3c192-12c7-44a2-8100-5307d6a9bb9d-kube-api-access-xjvld\") pod \"nova-scheduler-0\" (UID: \"b5d3c192-12c7-44a2-8100-5307d6a9bb9d\") " pod="openstack/nova-scheduler-0" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.601506 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.831213 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:08:50 crc kubenswrapper[4914]: I0127 14:08:50.972155 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.124397 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6038b9a-0f9f-4457-abd7-e4c71ef50128-config-data\") pod \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\" (UID: \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\") " Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.124469 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6038b9a-0f9f-4457-abd7-e4c71ef50128-combined-ca-bundle\") pod \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\" (UID: \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\") " Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.124579 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv4gf\" (UniqueName: \"kubernetes.io/projected/d6038b9a-0f9f-4457-abd7-e4c71ef50128-kube-api-access-pv4gf\") pod \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\" (UID: \"d6038b9a-0f9f-4457-abd7-e4c71ef50128\") " Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.138146 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6038b9a-0f9f-4457-abd7-e4c71ef50128-kube-api-access-pv4gf" (OuterVolumeSpecName: "kube-api-access-pv4gf") pod "d6038b9a-0f9f-4457-abd7-e4c71ef50128" (UID: "d6038b9a-0f9f-4457-abd7-e4c71ef50128"). InnerVolumeSpecName "kube-api-access-pv4gf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.162532 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.174792 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6038b9a-0f9f-4457-abd7-e4c71ef50128-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6038b9a-0f9f-4457-abd7-e4c71ef50128" (UID: "d6038b9a-0f9f-4457-abd7-e4c71ef50128"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.174923 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6038b9a-0f9f-4457-abd7-e4c71ef50128-config-data" (OuterVolumeSpecName: "config-data") pod "d6038b9a-0f9f-4457-abd7-e4c71ef50128" (UID: "d6038b9a-0f9f-4457-abd7-e4c71ef50128"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.218218 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a546639f-94d3-43dc-8591-a6444b2a2150","Type":"ContainerStarted","Data":"de58c52f81e5325e29d833076097f748217a3adcca7fbb1d5868e17c5523ceac"} Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.218520 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a546639f-94d3-43dc-8591-a6444b2a2150","Type":"ContainerStarted","Data":"163c4b356cee04d57de8c7d4e0050babe067013809778457224276e8b27ea2d9"} Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.218533 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a546639f-94d3-43dc-8591-a6444b2a2150","Type":"ContainerStarted","Data":"5afff53175017c2a439cdc7be51dc53873edf16c602a60c1ff0feaac48916c12"} Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.221528 4914 generic.go:334] "Generic (PLEG): container finished" podID="d6038b9a-0f9f-4457-abd7-e4c71ef50128" containerID="7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96" exitCode=0 Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.221564 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.221670 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d6038b9a-0f9f-4457-abd7-e4c71ef50128","Type":"ContainerDied","Data":"7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96"}
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.221726 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d6038b9a-0f9f-4457-abd7-e4c71ef50128","Type":"ContainerDied","Data":"60c0451478eb1cd26edf775a7799c2a4a885c788345371d490c736b4cc9da227"}
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.221746 4914 scope.go:117] "RemoveContainer" containerID="7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.227216 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b5d3c192-12c7-44a2-8100-5307d6a9bb9d","Type":"ContainerStarted","Data":"dc978ffbc74866925bd88c927473fd1584e370f04f1c83f0300e1cf253239662"}
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.227415 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6038b9a-0f9f-4457-abd7-e4c71ef50128-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.227450 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6038b9a-0f9f-4457-abd7-e4c71ef50128-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.227467 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv4gf\" (UniqueName: \"kubernetes.io/projected/d6038b9a-0f9f-4457-abd7-e4c71ef50128-kube-api-access-pv4gf\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.237467 4914 generic.go:334] "Generic (PLEG): container finished" podID="e452ff2e-dbbd-484d-80b0-45883aa5fca3" containerID="e20bc525535dca84894bea9c8166d5febea6348255cb0f9d2fb922214417f693" exitCode=0
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.238334 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e452ff2e-dbbd-484d-80b0-45883aa5fca3","Type":"ContainerDied","Data":"e20bc525535dca84894bea9c8166d5febea6348255cb0f9d2fb922214417f693"}
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.238364 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e452ff2e-dbbd-484d-80b0-45883aa5fca3","Type":"ContainerDied","Data":"c9858480c6607043a1d248d72a8ccf387bf2ebe6328dc7d241f52f095e8f6eee"}
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.238376 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9858480c6607043a1d248d72a8ccf387bf2ebe6328dc7d241f52f095e8f6eee"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.248682 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.24866313 podStartE2EDuration="2.24866313s" podCreationTimestamp="2026-01-27 14:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:51.240463234 +0000 UTC m=+1489.552813319" watchObservedRunningTime="2026-01-27 14:08:51.24866313 +0000 UTC m=+1489.561013215"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.349902 4914 scope.go:117] "RemoveContainer" containerID="7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96"
Jan 27 14:08:51 crc kubenswrapper[4914]: E0127 14:08:51.351209 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96\": container with ID starting with 7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96 not found: ID does not exist" containerID="7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.351251 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96"} err="failed to get container status \"7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96\": rpc error: code = NotFound desc = could not find container \"7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96\": container with ID starting with 7843a6997aac1d591965205b9fb55a0a9226bc9362b6b28aa2082b40f5339d96 not found: ID does not exist"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.370885 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.371856 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.393756 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.407988 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 14:08:51 crc kubenswrapper[4914]: E0127 14:08:51.408458 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e452ff2e-dbbd-484d-80b0-45883aa5fca3" containerName="nova-cell0-conductor-conductor"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.408477 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e452ff2e-dbbd-484d-80b0-45883aa5fca3" containerName="nova-cell0-conductor-conductor"
Jan 27 14:08:51 crc kubenswrapper[4914]: E0127 14:08:51.408504 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6038b9a-0f9f-4457-abd7-e4c71ef50128" containerName="nova-cell1-conductor-conductor"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.408513 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6038b9a-0f9f-4457-abd7-e4c71ef50128" containerName="nova-cell1-conductor-conductor"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.408664 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="e452ff2e-dbbd-484d-80b0-45883aa5fca3" containerName="nova-cell0-conductor-conductor"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.408698 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6038b9a-0f9f-4457-abd7-e4c71ef50128" containerName="nova-cell1-conductor-conductor"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.409421 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.411660 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.426347 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.534018 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqqgq\" (UniqueName: \"kubernetes.io/projected/e452ff2e-dbbd-484d-80b0-45883aa5fca3-kube-api-access-pqqgq\") pod \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\" (UID: \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\") "
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.534148 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e452ff2e-dbbd-484d-80b0-45883aa5fca3-config-data\") pod \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\" (UID: \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\") "
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.534593 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e452ff2e-dbbd-484d-80b0-45883aa5fca3-combined-ca-bundle\") pod \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\" (UID: \"e452ff2e-dbbd-484d-80b0-45883aa5fca3\") "
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.535122 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b4c18be-72eb-456f-9a55-eafc2cb451d0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3b4c18be-72eb-456f-9a55-eafc2cb451d0\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.535195 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4c18be-72eb-456f-9a55-eafc2cb451d0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3b4c18be-72eb-456f-9a55-eafc2cb451d0\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.536353 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mblbv\" (UniqueName: \"kubernetes.io/projected/3b4c18be-72eb-456f-9a55-eafc2cb451d0-kube-api-access-mblbv\") pod \"nova-cell1-conductor-0\" (UID: \"3b4c18be-72eb-456f-9a55-eafc2cb451d0\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.548705 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e452ff2e-dbbd-484d-80b0-45883aa5fca3-kube-api-access-pqqgq" (OuterVolumeSpecName: "kube-api-access-pqqgq") pod "e452ff2e-dbbd-484d-80b0-45883aa5fca3" (UID: "e452ff2e-dbbd-484d-80b0-45883aa5fca3"). InnerVolumeSpecName "kube-api-access-pqqgq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.560549 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e452ff2e-dbbd-484d-80b0-45883aa5fca3-config-data" (OuterVolumeSpecName: "config-data") pod "e452ff2e-dbbd-484d-80b0-45883aa5fca3" (UID: "e452ff2e-dbbd-484d-80b0-45883aa5fca3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.574759 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e452ff2e-dbbd-484d-80b0-45883aa5fca3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e452ff2e-dbbd-484d-80b0-45883aa5fca3" (UID: "e452ff2e-dbbd-484d-80b0-45883aa5fca3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.638052 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b4c18be-72eb-456f-9a55-eafc2cb451d0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3b4c18be-72eb-456f-9a55-eafc2cb451d0\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.638384 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4c18be-72eb-456f-9a55-eafc2cb451d0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3b4c18be-72eb-456f-9a55-eafc2cb451d0\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.638504 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mblbv\" (UniqueName: \"kubernetes.io/projected/3b4c18be-72eb-456f-9a55-eafc2cb451d0-kube-api-access-mblbv\") pod \"nova-cell1-conductor-0\" (UID: \"3b4c18be-72eb-456f-9a55-eafc2cb451d0\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.638554 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqqgq\" (UniqueName: \"kubernetes.io/projected/e452ff2e-dbbd-484d-80b0-45883aa5fca3-kube-api-access-pqqgq\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.638565 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e452ff2e-dbbd-484d-80b0-45883aa5fca3-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.638575 4914 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e452ff2e-dbbd-484d-80b0-45883aa5fca3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.641770 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b4c18be-72eb-456f-9a55-eafc2cb451d0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3b4c18be-72eb-456f-9a55-eafc2cb451d0\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.642039 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4c18be-72eb-456f-9a55-eafc2cb451d0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3b4c18be-72eb-456f-9a55-eafc2cb451d0\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.655792 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mblbv\" (UniqueName: \"kubernetes.io/projected/3b4c18be-72eb-456f-9a55-eafc2cb451d0-kube-api-access-mblbv\") pod \"nova-cell1-conductor-0\" (UID: \"3b4c18be-72eb-456f-9a55-eafc2cb451d0\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:08:51 crc kubenswrapper[4914]: I0127 14:08:51.733819 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 27 14:08:52 crc kubenswrapper[4914]: I0127 14:08:52.244149 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 14:08:52 crc kubenswrapper[4914]: I0127 14:08:52.261617 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b5d3c192-12c7-44a2-8100-5307d6a9bb9d","Type":"ContainerStarted","Data":"14cee13cbb211fbf0d84809ac9d67d65ce2d64019885be000afe7d9a2b60cc42"}
Jan 27 14:08:52 crc kubenswrapper[4914]: I0127 14:08:52.264071 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 14:08:52 crc kubenswrapper[4914]: I0127 14:08:52.287961 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.287938923 podStartE2EDuration="2.287938923s" podCreationTimestamp="2026-01-27 14:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:52.283514552 +0000 UTC m=+1490.595864637" watchObservedRunningTime="2026-01-27 14:08:52.287938923 +0000 UTC m=+1490.600289008"
Jan 27 14:08:52 crc kubenswrapper[4914]: I0127 14:08:52.305256 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6038b9a-0f9f-4457-abd7-e4c71ef50128" path="/var/lib/kubelet/pods/d6038b9a-0f9f-4457-abd7-e4c71ef50128/volumes"
Jan 27 14:08:53 crc kubenswrapper[4914]: I0127 14:08:53.155039 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 14:08:53 crc kubenswrapper[4914]: I0127 14:08:53.219557 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 14:08:53 crc kubenswrapper[4914]: I0127 14:08:53.273565 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3b4c18be-72eb-456f-9a55-eafc2cb451d0","Type":"ContainerStarted","Data":"a9418011af69607b476a7ab649ee43d88bf2dfef943484f85f26a34d985f296f"}
Jan 27 14:08:53 crc kubenswrapper[4914]: I0127 14:08:53.273620 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3b4c18be-72eb-456f-9a55-eafc2cb451d0","Type":"ContainerStarted","Data":"cef9d6b0d68fd47b9558fb9babae6b29894f9e66dd2babd526b401c8694a4809"}
Jan 27 14:08:53 crc kubenswrapper[4914]: I0127 14:08:53.302737 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.30270966 podStartE2EDuration="2.30270966s" podCreationTimestamp="2026-01-27 14:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:53.293413265 +0000 UTC m=+1491.605763350" watchObservedRunningTime="2026-01-27 14:08:53.30270966 +0000 UTC m=+1491.615059745"
Jan 27 14:08:53 crc kubenswrapper[4914]: I0127 14:08:53.598249 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 14:08:53 crc kubenswrapper[4914]: I0127 14:08:53.598428 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 14:08:54 crc kubenswrapper[4914]: I0127 14:08:54.281240 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 27 14:08:55 crc kubenswrapper[4914]: I0127 14:08:55.602149 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 27 14:08:55 crc kubenswrapper[4914]: I0127 14:08:55.832020 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:56 crc kubenswrapper[4914]: I0127 14:08:56.050611 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:56 crc kubenswrapper[4914]: I0127 14:08:56.362817 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:08:57 crc kubenswrapper[4914]: I0127 14:08:57.382456 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="ead132f0-586e-402b-87bb-f7109396498d" containerName="rabbitmq" containerID="cri-o://bdbe6000f255755bfcd1719879a61fef64524a82bf179d78758b04a5d7b435b8" gracePeriod=604796
Jan 27 14:08:57 crc kubenswrapper[4914]: I0127 14:08:57.578499 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9dc0242e-0a62-4f1c-b978-00f6b2651429" containerName="rabbitmq" containerID="cri-o://72a0e7ef894659f6e574eb6b278ddb50e7091ad93d45138a52a28b1bd5765080" gracePeriod=604796
Jan 27 14:08:57 crc kubenswrapper[4914]: I0127 14:08:57.855393 4914 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ead132f0-586e-402b-87bb-f7109396498d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused"
Jan 27 14:08:58 crc kubenswrapper[4914]: I0127 14:08:58.598738 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 27 14:08:58 crc kubenswrapper[4914]: I0127 14:08:58.599157 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 27 14:08:59 crc kubenswrapper[4914]: I0127 14:08:59.611099 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:08:59 crc kubenswrapper[4914]: I0127 14:08:59.611160 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:08:59 crc kubenswrapper[4914]: I0127 14:08:59.802295 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 14:08:59 crc kubenswrapper[4914]: I0127 14:08:59.802346 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 14:09:00 crc kubenswrapper[4914]: I0127 14:09:00.601896 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 27 14:09:00 crc kubenswrapper[4914]: I0127 14:09:00.657610 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 27 14:09:00 crc kubenswrapper[4914]: I0127 14:09:00.815021 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a546639f-94d3-43dc-8591-a6444b2a2150" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:09:00 crc kubenswrapper[4914]: I0127 14:09:00.815329 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a546639f-94d3-43dc-8591-a6444b2a2150" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:09:01 crc kubenswrapper[4914]: I0127 14:09:01.377377 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 27 14:09:01 crc kubenswrapper[4914]: I0127 14:09:01.768353 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 27 14:09:03 crc kubenswrapper[4914]: I0127 14:09:03.939222 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.015196 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-server-conf\") pod \"ead132f0-586e-402b-87bb-f7109396498d\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.015309 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ead132f0-586e-402b-87bb-f7109396498d-erlang-cookie-secret\") pod \"ead132f0-586e-402b-87bb-f7109396498d\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.015377 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ead132f0-586e-402b-87bb-f7109396498d-pod-info\") pod \"ead132f0-586e-402b-87bb-f7109396498d\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.015399 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwjhk\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-kube-api-access-vwjhk\") pod \"ead132f0-586e-402b-87bb-f7109396498d\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.015416 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ead132f0-586e-402b-87bb-f7109396498d\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.015484 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-confd\") pod \"ead132f0-586e-402b-87bb-f7109396498d\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.015508 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-plugins-conf\") pod \"ead132f0-586e-402b-87bb-f7109396498d\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.015650 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-config-data\") pod \"ead132f0-586e-402b-87bb-f7109396498d\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.015706 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-plugins\") pod \"ead132f0-586e-402b-87bb-f7109396498d\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.015726 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-erlang-cookie\") pod \"ead132f0-586e-402b-87bb-f7109396498d\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.015743 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-tls\") pod \"ead132f0-586e-402b-87bb-f7109396498d\" (UID: \"ead132f0-586e-402b-87bb-f7109396498d\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.020423 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ead132f0-586e-402b-87bb-f7109396498d" (UID: "ead132f0-586e-402b-87bb-f7109396498d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.022575 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ead132f0-586e-402b-87bb-f7109396498d-pod-info" (OuterVolumeSpecName: "pod-info") pod "ead132f0-586e-402b-87bb-f7109396498d" (UID: "ead132f0-586e-402b-87bb-f7109396498d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.022856 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ead132f0-586e-402b-87bb-f7109396498d" (UID: "ead132f0-586e-402b-87bb-f7109396498d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.023093 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ead132f0-586e-402b-87bb-f7109396498d" (UID: "ead132f0-586e-402b-87bb-f7109396498d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.036374 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "ead132f0-586e-402b-87bb-f7109396498d" (UID: "ead132f0-586e-402b-87bb-f7109396498d"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.036404 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-kube-api-access-vwjhk" (OuterVolumeSpecName: "kube-api-access-vwjhk") pod "ead132f0-586e-402b-87bb-f7109396498d" (UID: "ead132f0-586e-402b-87bb-f7109396498d"). InnerVolumeSpecName "kube-api-access-vwjhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.039102 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead132f0-586e-402b-87bb-f7109396498d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ead132f0-586e-402b-87bb-f7109396498d" (UID: "ead132f0-586e-402b-87bb-f7109396498d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.050223 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ead132f0-586e-402b-87bb-f7109396498d" (UID: "ead132f0-586e-402b-87bb-f7109396498d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.079299 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-config-data" (OuterVolumeSpecName: "config-data") pod "ead132f0-586e-402b-87bb-f7109396498d" (UID: "ead132f0-586e-402b-87bb-f7109396498d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.115585 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-server-conf" (OuterVolumeSpecName: "server-conf") pod "ead132f0-586e-402b-87bb-f7109396498d" (UID: "ead132f0-586e-402b-87bb-f7109396498d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.117863 4914 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ead132f0-586e-402b-87bb-f7109396498d-pod-info\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.117902 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwjhk\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-kube-api-access-vwjhk\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.117931 4914 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.117945 4914 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.117959 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.117971 4914 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.117983 4914 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.117994 4914 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.118004 4914 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ead132f0-586e-402b-87bb-f7109396498d-server-conf\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.118015 4914 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ead132f0-586e-402b-87bb-f7109396498d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.119762 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.147422 4914 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.176731 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ead132f0-586e-402b-87bb-f7109396498d" (UID: "ead132f0-586e-402b-87bb-f7109396498d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.219231 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-server-conf\") pod \"9dc0242e-0a62-4f1c-b978-00f6b2651429\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.219284 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-confd\") pod \"9dc0242e-0a62-4f1c-b978-00f6b2651429\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.219316 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9dc0242e-0a62-4f1c-b978-00f6b2651429-erlang-cookie-secret\") pod \"9dc0242e-0a62-4f1c-b978-00f6b2651429\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.219430 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-tls\") pod \"9dc0242e-0a62-4f1c-b978-00f6b2651429\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.219471 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-plugins\") pod \"9dc0242e-0a62-4f1c-b978-00f6b2651429\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.219520 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-plugins-conf\") pod \"9dc0242e-0a62-4f1c-b978-00f6b2651429\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.219547 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"9dc0242e-0a62-4f1c-b978-00f6b2651429\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.219590 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9dc0242e-0a62-4f1c-b978-00f6b2651429-pod-info\") pod \"9dc0242e-0a62-4f1c-b978-00f6b2651429\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.219719 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-erlang-cookie\") pod \"9dc0242e-0a62-4f1c-b978-00f6b2651429\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.219761 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-config-data\") pod \"9dc0242e-0a62-4f1c-b978-00f6b2651429\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.219817 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqrm8\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-kube-api-access-lqrm8\") pod \"9dc0242e-0a62-4f1c-b978-00f6b2651429\" (UID: \"9dc0242e-0a62-4f1c-b978-00f6b2651429\") "
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.220129 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9dc0242e-0a62-4f1c-b978-00f6b2651429" (UID: "9dc0242e-0a62-4f1c-b978-00f6b2651429"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.221093 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9dc0242e-0a62-4f1c-b978-00f6b2651429" (UID: "9dc0242e-0a62-4f1c-b978-00f6b2651429"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.221416 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9dc0242e-0a62-4f1c-b978-00f6b2651429" (UID: "9dc0242e-0a62-4f1c-b978-00f6b2651429"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.221597 4914 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.221687 4914 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.222131 4914 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.222517 4914 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ead132f0-586e-402b-87bb-f7109396498d-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.222643 4914 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.225103 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc0242e-0a62-4f1c-b978-00f6b2651429-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9dc0242e-0a62-4f1c-b978-00f6b2651429" (UID: "9dc0242e-0a62-4f1c-b978-00f6b2651429"). InnerVolumeSpecName "erlang-cookie-secret".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.225609 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9dc0242e-0a62-4f1c-b978-00f6b2651429-pod-info" (OuterVolumeSpecName: "pod-info") pod "9dc0242e-0a62-4f1c-b978-00f6b2651429" (UID: "9dc0242e-0a62-4f1c-b978-00f6b2651429"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.228162 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9dc0242e-0a62-4f1c-b978-00f6b2651429" (UID: "9dc0242e-0a62-4f1c-b978-00f6b2651429"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.229339 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-kube-api-access-lqrm8" (OuterVolumeSpecName: "kube-api-access-lqrm8") pod "9dc0242e-0a62-4f1c-b978-00f6b2651429" (UID: "9dc0242e-0a62-4f1c-b978-00f6b2651429"). InnerVolumeSpecName "kube-api-access-lqrm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.235140 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "9dc0242e-0a62-4f1c-b978-00f6b2651429" (UID: "9dc0242e-0a62-4f1c-b978-00f6b2651429"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.269351 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-config-data" (OuterVolumeSpecName: "config-data") pod "9dc0242e-0a62-4f1c-b978-00f6b2651429" (UID: "9dc0242e-0a62-4f1c-b978-00f6b2651429"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.283758 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-server-conf" (OuterVolumeSpecName: "server-conf") pod "9dc0242e-0a62-4f1c-b978-00f6b2651429" (UID: "9dc0242e-0a62-4f1c-b978-00f6b2651429"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.329658 4914 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.329696 4914 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9dc0242e-0a62-4f1c-b978-00f6b2651429-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.329705 4914 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.329713 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqrm8\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-kube-api-access-lqrm8\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:04 
crc kubenswrapper[4914]: I0127 14:09:04.329724 4914 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9dc0242e-0a62-4f1c-b978-00f6b2651429-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.329732 4914 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9dc0242e-0a62-4f1c-b978-00f6b2651429-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.329739 4914 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.355406 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9dc0242e-0a62-4f1c-b978-00f6b2651429" (UID: "9dc0242e-0a62-4f1c-b978-00f6b2651429"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.359366 4914 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.376574 4914 generic.go:334] "Generic (PLEG): container finished" podID="9dc0242e-0a62-4f1c-b978-00f6b2651429" containerID="72a0e7ef894659f6e574eb6b278ddb50e7091ad93d45138a52a28b1bd5765080" exitCode=0 Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.376611 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.376635 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9dc0242e-0a62-4f1c-b978-00f6b2651429","Type":"ContainerDied","Data":"72a0e7ef894659f6e574eb6b278ddb50e7091ad93d45138a52a28b1bd5765080"} Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.376664 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9dc0242e-0a62-4f1c-b978-00f6b2651429","Type":"ContainerDied","Data":"31b1c3b1a1709616dbd7c6dad02dd0ec92db6c6158ac5ac4c69c452e06450637"} Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.376685 4914 scope.go:117] "RemoveContainer" containerID="72a0e7ef894659f6e574eb6b278ddb50e7091ad93d45138a52a28b1bd5765080" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.381637 4914 generic.go:334] "Generic (PLEG): container finished" podID="ead132f0-586e-402b-87bb-f7109396498d" containerID="bdbe6000f255755bfcd1719879a61fef64524a82bf179d78758b04a5d7b435b8" exitCode=0 Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.381682 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ead132f0-586e-402b-87bb-f7109396498d","Type":"ContainerDied","Data":"bdbe6000f255755bfcd1719879a61fef64524a82bf179d78758b04a5d7b435b8"} Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.381710 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ead132f0-586e-402b-87bb-f7109396498d","Type":"ContainerDied","Data":"0127ddfa307129ca7e976b76f78db663c2a0d5c0db9c6ac3bd4b97acf18d678f"} Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.381766 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.405371 4914 scope.go:117] "RemoveContainer" containerID="310f0d8ee69d8348287d42e688e3c34bcd22a0c014b71fa45e4908f7c3f9dc7b" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.413971 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.428493 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.430982 4914 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9dc0242e-0a62-4f1c-b978-00f6b2651429-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.431011 4914 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.439408 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.441528 4914 scope.go:117] "RemoveContainer" containerID="72a0e7ef894659f6e574eb6b278ddb50e7091ad93d45138a52a28b1bd5765080" Jan 27 14:09:04 crc kubenswrapper[4914]: E0127 14:09:04.442315 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72a0e7ef894659f6e574eb6b278ddb50e7091ad93d45138a52a28b1bd5765080\": container with ID starting with 72a0e7ef894659f6e574eb6b278ddb50e7091ad93d45138a52a28b1bd5765080 not found: ID does not exist" containerID="72a0e7ef894659f6e574eb6b278ddb50e7091ad93d45138a52a28b1bd5765080" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.442346 4914 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"72a0e7ef894659f6e574eb6b278ddb50e7091ad93d45138a52a28b1bd5765080"} err="failed to get container status \"72a0e7ef894659f6e574eb6b278ddb50e7091ad93d45138a52a28b1bd5765080\": rpc error: code = NotFound desc = could not find container \"72a0e7ef894659f6e574eb6b278ddb50e7091ad93d45138a52a28b1bd5765080\": container with ID starting with 72a0e7ef894659f6e574eb6b278ddb50e7091ad93d45138a52a28b1bd5765080 not found: ID does not exist" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.442368 4914 scope.go:117] "RemoveContainer" containerID="310f0d8ee69d8348287d42e688e3c34bcd22a0c014b71fa45e4908f7c3f9dc7b" Jan 27 14:09:04 crc kubenswrapper[4914]: E0127 14:09:04.442603 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"310f0d8ee69d8348287d42e688e3c34bcd22a0c014b71fa45e4908f7c3f9dc7b\": container with ID starting with 310f0d8ee69d8348287d42e688e3c34bcd22a0c014b71fa45e4908f7c3f9dc7b not found: ID does not exist" containerID="310f0d8ee69d8348287d42e688e3c34bcd22a0c014b71fa45e4908f7c3f9dc7b" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.442700 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310f0d8ee69d8348287d42e688e3c34bcd22a0c014b71fa45e4908f7c3f9dc7b"} err="failed to get container status \"310f0d8ee69d8348287d42e688e3c34bcd22a0c014b71fa45e4908f7c3f9dc7b\": rpc error: code = NotFound desc = could not find container \"310f0d8ee69d8348287d42e688e3c34bcd22a0c014b71fa45e4908f7c3f9dc7b\": container with ID starting with 310f0d8ee69d8348287d42e688e3c34bcd22a0c014b71fa45e4908f7c3f9dc7b not found: ID does not exist" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.442785 4914 scope.go:117] "RemoveContainer" containerID="bdbe6000f255755bfcd1719879a61fef64524a82bf179d78758b04a5d7b435b8" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.452985 4914 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.473889 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:09:04 crc kubenswrapper[4914]: E0127 14:09:04.474532 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead132f0-586e-402b-87bb-f7109396498d" containerName="rabbitmq" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.474710 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead132f0-586e-402b-87bb-f7109396498d" containerName="rabbitmq" Jan 27 14:09:04 crc kubenswrapper[4914]: E0127 14:09:04.474790 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead132f0-586e-402b-87bb-f7109396498d" containerName="setup-container" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.474868 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead132f0-586e-402b-87bb-f7109396498d" containerName="setup-container" Jan 27 14:09:04 crc kubenswrapper[4914]: E0127 14:09:04.474939 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc0242e-0a62-4f1c-b978-00f6b2651429" containerName="rabbitmq" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.474996 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc0242e-0a62-4f1c-b978-00f6b2651429" containerName="rabbitmq" Jan 27 14:09:04 crc kubenswrapper[4914]: E0127 14:09:04.475069 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc0242e-0a62-4f1c-b978-00f6b2651429" containerName="setup-container" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.475131 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc0242e-0a62-4f1c-b978-00f6b2651429" containerName="setup-container" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.475355 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc0242e-0a62-4f1c-b978-00f6b2651429" containerName="rabbitmq" Jan 27 14:09:04 crc 
kubenswrapper[4914]: I0127 14:09:04.475428 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead132f0-586e-402b-87bb-f7109396498d" containerName="rabbitmq" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.476437 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.478595 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.478750 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.479015 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.479272 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.479405 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.479567 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7n2f5" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.479744 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.485968 4914 scope.go:117] "RemoveContainer" containerID="60868d2a16744a7d8e12849ab0667e58062dc138a6494289bb33b1915cc1001f" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.492678 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.494373 4914 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.495808 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.498900 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-j27zd" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.499227 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.499720 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.499994 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.500020 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.500176 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.508598 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.518968 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.536573 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa420d84-09ad-44c4-9af0-fddfcab7501c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc 
kubenswrapper[4914]: I0127 14:09:04.536622 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa420d84-09ad-44c4-9af0-fddfcab7501c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.536674 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3655c22-46a7-4ed5-bba1-4a294940777d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.536705 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3655c22-46a7-4ed5-bba1-4a294940777d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.536732 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa420d84-09ad-44c4-9af0-fddfcab7501c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.536755 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgs8n\" (UniqueName: \"kubernetes.io/projected/c3655c22-46a7-4ed5-bba1-4a294940777d-kube-api-access-xgs8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc 
kubenswrapper[4914]: I0127 14:09:04.536787 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3655c22-46a7-4ed5-bba1-4a294940777d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.536822 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3655c22-46a7-4ed5-bba1-4a294940777d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.536867 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3655c22-46a7-4ed5-bba1-4a294940777d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.536936 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.536960 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa420d84-09ad-44c4-9af0-fddfcab7501c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.536983 4914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.537055 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa420d84-09ad-44c4-9af0-fddfcab7501c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.537077 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3655c22-46a7-4ed5-bba1-4a294940777d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.537109 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3655c22-46a7-4ed5-bba1-4a294940777d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.537137 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa420d84-09ad-44c4-9af0-fddfcab7501c-config-data\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.537160 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3655c22-46a7-4ed5-bba1-4a294940777d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.537187 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa420d84-09ad-44c4-9af0-fddfcab7501c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.537220 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzgt\" (UniqueName: \"kubernetes.io/projected/fa420d84-09ad-44c4-9af0-fddfcab7501c-kube-api-access-6fzgt\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.537244 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa420d84-09ad-44c4-9af0-fddfcab7501c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.537276 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa420d84-09ad-44c4-9af0-fddfcab7501c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.537297 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/c3655c22-46a7-4ed5-bba1-4a294940777d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.639532 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa420d84-09ad-44c4-9af0-fddfcab7501c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.639607 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa420d84-09ad-44c4-9af0-fddfcab7501c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.639680 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3655c22-46a7-4ed5-bba1-4a294940777d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.639714 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3655c22-46a7-4ed5-bba1-4a294940777d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.639772 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa420d84-09ad-44c4-9af0-fddfcab7501c-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.639801 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgs8n\" (UniqueName: \"kubernetes.io/projected/c3655c22-46a7-4ed5-bba1-4a294940777d-kube-api-access-xgs8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.639871 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3655c22-46a7-4ed5-bba1-4a294940777d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.639945 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3655c22-46a7-4ed5-bba1-4a294940777d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.639982 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3655c22-46a7-4ed5-bba1-4a294940777d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.640052 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc 
kubenswrapper[4914]: I0127 14:09:04.640100 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa420d84-09ad-44c4-9af0-fddfcab7501c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.640132 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.640225 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa420d84-09ad-44c4-9af0-fddfcab7501c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.640276 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3655c22-46a7-4ed5-bba1-4a294940777d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.640319 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3655c22-46a7-4ed5-bba1-4a294940777d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.640473 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/fa420d84-09ad-44c4-9af0-fddfcab7501c-config-data\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.640548 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3655c22-46a7-4ed5-bba1-4a294940777d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.640610 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa420d84-09ad-44c4-9af0-fddfcab7501c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.640656 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzgt\" (UniqueName: \"kubernetes.io/projected/fa420d84-09ad-44c4-9af0-fddfcab7501c-kube-api-access-6fzgt\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.640712 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa420d84-09ad-44c4-9af0-fddfcab7501c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.640749 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa420d84-09ad-44c4-9af0-fddfcab7501c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.640802 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3655c22-46a7-4ed5-bba1-4a294940777d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.641004 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.642974 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa420d84-09ad-44c4-9af0-fddfcab7501c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.643809 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa420d84-09ad-44c4-9af0-fddfcab7501c-config-data\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.643930 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.644351 
4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa420d84-09ad-44c4-9af0-fddfcab7501c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.644594 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa420d84-09ad-44c4-9af0-fddfcab7501c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.644954 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3655c22-46a7-4ed5-bba1-4a294940777d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.645759 4914 scope.go:117] "RemoveContainer" containerID="bdbe6000f255755bfcd1719879a61fef64524a82bf179d78758b04a5d7b435b8" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.646526 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3655c22-46a7-4ed5-bba1-4a294940777d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: E0127 14:09:04.646658 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdbe6000f255755bfcd1719879a61fef64524a82bf179d78758b04a5d7b435b8\": container with ID starting with bdbe6000f255755bfcd1719879a61fef64524a82bf179d78758b04a5d7b435b8 not found: ID does not exist" 
containerID="bdbe6000f255755bfcd1719879a61fef64524a82bf179d78758b04a5d7b435b8" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.646693 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdbe6000f255755bfcd1719879a61fef64524a82bf179d78758b04a5d7b435b8"} err="failed to get container status \"bdbe6000f255755bfcd1719879a61fef64524a82bf179d78758b04a5d7b435b8\": rpc error: code = NotFound desc = could not find container \"bdbe6000f255755bfcd1719879a61fef64524a82bf179d78758b04a5d7b435b8\": container with ID starting with bdbe6000f255755bfcd1719879a61fef64524a82bf179d78758b04a5d7b435b8 not found: ID does not exist" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.646724 4914 scope.go:117] "RemoveContainer" containerID="60868d2a16744a7d8e12849ab0667e58062dc138a6494289bb33b1915cc1001f" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.646924 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3655c22-46a7-4ed5-bba1-4a294940777d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: E0127 14:09:04.647804 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60868d2a16744a7d8e12849ab0667e58062dc138a6494289bb33b1915cc1001f\": container with ID starting with 60868d2a16744a7d8e12849ab0667e58062dc138a6494289bb33b1915cc1001f not found: ID does not exist" containerID="60868d2a16744a7d8e12849ab0667e58062dc138a6494289bb33b1915cc1001f" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.647852 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60868d2a16744a7d8e12849ab0667e58062dc138a6494289bb33b1915cc1001f"} err="failed to get container status 
\"60868d2a16744a7d8e12849ab0667e58062dc138a6494289bb33b1915cc1001f\": rpc error: code = NotFound desc = could not find container \"60868d2a16744a7d8e12849ab0667e58062dc138a6494289bb33b1915cc1001f\": container with ID starting with 60868d2a16744a7d8e12849ab0667e58062dc138a6494289bb33b1915cc1001f not found: ID does not exist" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.649912 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa420d84-09ad-44c4-9af0-fddfcab7501c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.651401 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3655c22-46a7-4ed5-bba1-4a294940777d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.697366 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3655c22-46a7-4ed5-bba1-4a294940777d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.697702 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3655c22-46a7-4ed5-bba1-4a294940777d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.707234 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/c3655c22-46a7-4ed5-bba1-4a294940777d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.707936 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa420d84-09ad-44c4-9af0-fddfcab7501c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.708323 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa420d84-09ad-44c4-9af0-fddfcab7501c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.709664 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa420d84-09ad-44c4-9af0-fddfcab7501c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.712774 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzgt\" (UniqueName: \"kubernetes.io/projected/fa420d84-09ad-44c4-9af0-fddfcab7501c-kube-api-access-6fzgt\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.733274 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa420d84-09ad-44c4-9af0-fddfcab7501c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " 
pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.777538 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3655c22-46a7-4ed5-bba1-4a294940777d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.778386 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3655c22-46a7-4ed5-bba1-4a294940777d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.779625 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgs8n\" (UniqueName: \"kubernetes.io/projected/c3655c22-46a7-4ed5-bba1-4a294940777d-kube-api-access-xgs8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.794587 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3655c22-46a7-4ed5-bba1-4a294940777d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.838106 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"fa420d84-09ad-44c4-9af0-fddfcab7501c\") " pod="openstack/rabbitmq-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.855916 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:04 crc kubenswrapper[4914]: I0127 14:09:04.867965 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:09:05 crc kubenswrapper[4914]: I0127 14:09:05.390861 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:09:05 crc kubenswrapper[4914]: I0127 14:09:05.404211 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:09:05 crc kubenswrapper[4914]: I0127 14:09:05.413123 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3655c22-46a7-4ed5-bba1-4a294940777d","Type":"ContainerStarted","Data":"73485972de793a9ac9e070e42055b90767026370ea5b6728bb402fabaef06969"} Jan 27 14:09:06 crc kubenswrapper[4914]: I0127 14:09:06.305457 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc0242e-0a62-4f1c-b978-00f6b2651429" path="/var/lib/kubelet/pods/9dc0242e-0a62-4f1c-b978-00f6b2651429/volumes" Jan 27 14:09:06 crc kubenswrapper[4914]: I0127 14:09:06.306733 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ead132f0-586e-402b-87bb-f7109396498d" path="/var/lib/kubelet/pods/ead132f0-586e-402b-87bb-f7109396498d/volumes" Jan 27 14:09:06 crc kubenswrapper[4914]: I0127 14:09:06.422983 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fa420d84-09ad-44c4-9af0-fddfcab7501c","Type":"ContainerStarted","Data":"ba0e5051162182293e0916e08ef204c63917159fa4dd682a6d4da62daf098df6"} Jan 27 14:09:07 crc kubenswrapper[4914]: I0127 14:09:07.441167 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fa420d84-09ad-44c4-9af0-fddfcab7501c","Type":"ContainerStarted","Data":"ae6f4932ee1a553cf9b79083e449649b01c1267647a8cf242d75d43e438917ed"} Jan 27 14:09:07 crc 
kubenswrapper[4914]: I0127 14:09:07.446623 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3655c22-46a7-4ed5-bba1-4a294940777d","Type":"ContainerStarted","Data":"ca80b4dcadea311d8cfe5c094675b5316d43b41dfaa72e99ca4fd7bd79224c98"} Jan 27 14:09:07 crc kubenswrapper[4914]: I0127 14:09:07.691231 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:09:07 crc kubenswrapper[4914]: I0127 14:09:07.691306 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:09:08 crc kubenswrapper[4914]: I0127 14:09:08.605822 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 14:09:08 crc kubenswrapper[4914]: I0127 14:09:08.606204 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 14:09:08 crc kubenswrapper[4914]: I0127 14:09:08.612501 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 14:09:08 crc kubenswrapper[4914]: I0127 14:09:08.614352 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 14:09:08 crc kubenswrapper[4914]: I0127 14:09:08.895043 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-wtzqw"] Jan 27 14:09:08 crc kubenswrapper[4914]: I0127 14:09:08.896627 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:08 crc kubenswrapper[4914]: I0127 14:09:08.902506 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 27 14:09:08 crc kubenswrapper[4914]: I0127 14:09:08.913678 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-wtzqw"] Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.043540 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-config\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.043589 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-ovsdbserver-sb\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.043627 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-dns-svc\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.043654 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5smkr\" (UniqueName: \"kubernetes.io/projected/12a4cf14-22d7-4495-9a6d-98138807d10e-kube-api-access-5smkr\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " 
pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.043886 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-openstack-edpm-ipam\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.044192 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-ovsdbserver-nb\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.044294 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-dns-swift-storage-0\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.146095 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-dns-swift-storage-0\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.146164 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-config\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: 
\"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.146194 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-ovsdbserver-sb\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.146242 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-dns-svc\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.146280 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5smkr\" (UniqueName: \"kubernetes.io/projected/12a4cf14-22d7-4495-9a6d-98138807d10e-kube-api-access-5smkr\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.146374 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-openstack-edpm-ipam\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.146480 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-ovsdbserver-nb\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " 
pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.147209 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-dns-swift-storage-0\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.147342 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-config\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.147413 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-ovsdbserver-nb\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.147519 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-ovsdbserver-sb\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.148097 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-openstack-edpm-ipam\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: 
I0127 14:09:09.148107 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-dns-svc\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.172218 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5smkr\" (UniqueName: \"kubernetes.io/projected/12a4cf14-22d7-4495-9a6d-98138807d10e-kube-api-access-5smkr\") pod \"dnsmasq-dns-668b55cdd7-wtzqw\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.217310 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.685174 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-wtzqw"] Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.815603 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.816396 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.821095 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:09:09 crc kubenswrapper[4914]: I0127 14:09:09.823559 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 14:09:10 crc kubenswrapper[4914]: I0127 14:09:10.485782 4914 generic.go:334] "Generic (PLEG): container finished" podID="12a4cf14-22d7-4495-9a6d-98138807d10e" containerID="aff2b6eccb54222113aba572495f1de3864101a975836e5f6d7aa49cc33f5340" exitCode=0 Jan 27 14:09:10 crc 
kubenswrapper[4914]: I0127 14:09:10.485823 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" event={"ID":"12a4cf14-22d7-4495-9a6d-98138807d10e","Type":"ContainerDied","Data":"aff2b6eccb54222113aba572495f1de3864101a975836e5f6d7aa49cc33f5340"}
Jan 27 14:09:10 crc kubenswrapper[4914]: I0127 14:09:10.486138 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" event={"ID":"12a4cf14-22d7-4495-9a6d-98138807d10e","Type":"ContainerStarted","Data":"6235b8a8a46b6f2297255378d4fa80b628bbc1277e875f524fe02f4e319b2ae0"}
Jan 27 14:09:10 crc kubenswrapper[4914]: I0127 14:09:10.486644 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 27 14:09:10 crc kubenswrapper[4914]: I0127 14:09:10.493775 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 27 14:09:11 crc kubenswrapper[4914]: I0127 14:09:11.508066 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" event={"ID":"12a4cf14-22d7-4495-9a6d-98138807d10e","Type":"ContainerStarted","Data":"5ec52d085c5740cb38d7873df1d836dee25db06826a98b85618034fb5c975fa3"}
Jan 27 14:09:11 crc kubenswrapper[4914]: I0127 14:09:11.508700 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw"
Jan 27 14:09:11 crc kubenswrapper[4914]: I0127 14:09:11.535927 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" podStartSLOduration=3.535905148 podStartE2EDuration="3.535905148s" podCreationTimestamp="2026-01-27 14:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:09:11.530981912 +0000 UTC m=+1509.843331997" watchObservedRunningTime="2026-01-27 14:09:11.535905148 +0000 UTC m=+1509.848255233"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.219019 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.338013 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-v7nlp"]
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.338243 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" podUID="ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" containerName="dnsmasq-dns" containerID="cri-o://033ba27cb7821e0546451327e09e9cb96e72a49a52be0876d9ff5d3c1933e93c" gracePeriod=10
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.589033 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66fc59ccbf-9wmtl"]
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.590775 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.592859 4914 generic.go:334] "Generic (PLEG): container finished" podID="ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" containerID="033ba27cb7821e0546451327e09e9cb96e72a49a52be0876d9ff5d3c1933e93c" exitCode=0
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.592894 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" event={"ID":"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef","Type":"ContainerDied","Data":"033ba27cb7821e0546451327e09e9cb96e72a49a52be0876d9ff5d3c1933e93c"}
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.659827 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66fc59ccbf-9wmtl"]
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.662446 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-dns-swift-storage-0\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.662497 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-dns-svc\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.662523 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdpp7\" (UniqueName: \"kubernetes.io/projected/c054fe54-b82e-4f46-9f54-29de25ea1583-kube-api-access-rdpp7\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.662542 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-config\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.662602 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-ovsdbserver-sb\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.662630 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-ovsdbserver-nb\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.662656 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-openstack-edpm-ipam\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.764083 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdpp7\" (UniqueName: \"kubernetes.io/projected/c054fe54-b82e-4f46-9f54-29de25ea1583-kube-api-access-rdpp7\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.764435 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-config\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.764527 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-ovsdbserver-sb\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.764572 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-ovsdbserver-nb\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.764603 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-openstack-edpm-ipam\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.764665 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-dns-swift-storage-0\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.764688 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-dns-svc\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.765806 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-config\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.765860 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-ovsdbserver-nb\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.765997 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-dns-svc\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.766715 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-dns-swift-storage-0\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.766725 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-openstack-edpm-ipam\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.767294 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c054fe54-b82e-4f46-9f54-29de25ea1583-ovsdbserver-sb\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.788889 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdpp7\" (UniqueName: \"kubernetes.io/projected/c054fe54-b82e-4f46-9f54-29de25ea1583-kube-api-access-rdpp7\") pod \"dnsmasq-dns-66fc59ccbf-9wmtl\" (UID: \"c054fe54-b82e-4f46-9f54-29de25ea1583\") " pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:19 crc kubenswrapper[4914]: I0127 14:09:19.973100 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.000044 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-v7nlp"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.069512 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-ovsdbserver-nb\") pod \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") "
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.069575 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-dns-swift-storage-0\") pod \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") "
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.069622 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-config\") pod \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") "
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.069687 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-dns-svc\") pod \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") "
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.069733 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-ovsdbserver-sb\") pod \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") "
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.069760 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpb8c\" (UniqueName: \"kubernetes.io/projected/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-kube-api-access-fpb8c\") pod \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\" (UID: \"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef\") "
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.074972 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-kube-api-access-fpb8c" (OuterVolumeSpecName: "kube-api-access-fpb8c") pod "ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" (UID: "ed8eb59d-eb98-46a0-808b-678b4bd1d5ef"). InnerVolumeSpecName "kube-api-access-fpb8c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.141167 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" (UID: "ed8eb59d-eb98-46a0-808b-678b4bd1d5ef"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.142226 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-config" (OuterVolumeSpecName: "config") pod "ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" (UID: "ed8eb59d-eb98-46a0-808b-678b4bd1d5ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.153987 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" (UID: "ed8eb59d-eb98-46a0-808b-678b4bd1d5ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.158248 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" (UID: "ed8eb59d-eb98-46a0-808b-678b4bd1d5ef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.158662 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" (UID: "ed8eb59d-eb98-46a0-808b-678b4bd1d5ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.171142 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.171175 4914 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.171187 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.171195 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.171203 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.171212 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpb8c\" (UniqueName: \"kubernetes.io/projected/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef-kube-api-access-fpb8c\") on node \"crc\" DevicePath \"\""
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.471333 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66fc59ccbf-9wmtl"]
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.608354 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-v7nlp" event={"ID":"ed8eb59d-eb98-46a0-808b-678b4bd1d5ef","Type":"ContainerDied","Data":"3c170324e9d459871f9be7ecbf915e818c66ed55fc91d97319f16bc4eaf3f788"}
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.608773 4914 scope.go:117] "RemoveContainer" containerID="033ba27cb7821e0546451327e09e9cb96e72a49a52be0876d9ff5d3c1933e93c"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.608386 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-v7nlp"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.632357 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl" event={"ID":"c054fe54-b82e-4f46-9f54-29de25ea1583","Type":"ContainerStarted","Data":"e27ec9cfb3b48d8fdcc692cd863eb0c40b66ae79daaa870cc6654c3215db7726"}
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.661149 4914 scope.go:117] "RemoveContainer" containerID="f6a4b339a800946a03774f043c586362ac2b5eef075716cad7fc383bc92c867a"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.681336 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-v7nlp"]
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.695470 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zk259"]
Jan 27 14:09:20 crc kubenswrapper[4914]: E0127 14:09:20.696065 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" containerName="init"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.696083 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" containerName="init"
Jan 27 14:09:20 crc kubenswrapper[4914]: E0127 14:09:20.696131 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" containerName="dnsmasq-dns"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.696140 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" containerName="dnsmasq-dns"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.696394 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" containerName="dnsmasq-dns"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.698146 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.710411 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-v7nlp"]
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.724870 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zk259"]
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.782958 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xls65\" (UniqueName: \"kubernetes.io/projected/7da5aca4-c382-4626-a94d-932658438b32-kube-api-access-xls65\") pod \"redhat-operators-zk259\" (UID: \"7da5aca4-c382-4626-a94d-932658438b32\") " pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.783059 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7da5aca4-c382-4626-a94d-932658438b32-utilities\") pod \"redhat-operators-zk259\" (UID: \"7da5aca4-c382-4626-a94d-932658438b32\") " pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.783304 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7da5aca4-c382-4626-a94d-932658438b32-catalog-content\") pod \"redhat-operators-zk259\" (UID: \"7da5aca4-c382-4626-a94d-932658438b32\") " pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.885272 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7da5aca4-c382-4626-a94d-932658438b32-catalog-content\") pod \"redhat-operators-zk259\" (UID: \"7da5aca4-c382-4626-a94d-932658438b32\") " pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.885442 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xls65\" (UniqueName: \"kubernetes.io/projected/7da5aca4-c382-4626-a94d-932658438b32-kube-api-access-xls65\") pod \"redhat-operators-zk259\" (UID: \"7da5aca4-c382-4626-a94d-932658438b32\") " pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.885478 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7da5aca4-c382-4626-a94d-932658438b32-utilities\") pod \"redhat-operators-zk259\" (UID: \"7da5aca4-c382-4626-a94d-932658438b32\") " pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.885936 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7da5aca4-c382-4626-a94d-932658438b32-catalog-content\") pod \"redhat-operators-zk259\" (UID: \"7da5aca4-c382-4626-a94d-932658438b32\") " pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.885993 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7da5aca4-c382-4626-a94d-932658438b32-utilities\") pod \"redhat-operators-zk259\" (UID: \"7da5aca4-c382-4626-a94d-932658438b32\") " pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:09:20 crc kubenswrapper[4914]: I0127 14:09:20.913968 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xls65\" (UniqueName: \"kubernetes.io/projected/7da5aca4-c382-4626-a94d-932658438b32-kube-api-access-xls65\") pod \"redhat-operators-zk259\" (UID: \"7da5aca4-c382-4626-a94d-932658438b32\") " pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:09:21 crc kubenswrapper[4914]: I0127 14:09:21.065440 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:09:21 crc kubenswrapper[4914]: I0127 14:09:21.512112 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zk259"]
Jan 27 14:09:21 crc kubenswrapper[4914]: W0127 14:09:21.518099 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7da5aca4_c382_4626_a94d_932658438b32.slice/crio-5ae5c2fc1987a21e0b2bac2a6eb7edc70dff6b4348019863d089d016ba68f3a5 WatchSource:0}: Error finding container 5ae5c2fc1987a21e0b2bac2a6eb7edc70dff6b4348019863d089d016ba68f3a5: Status 404 returned error can't find the container with id 5ae5c2fc1987a21e0b2bac2a6eb7edc70dff6b4348019863d089d016ba68f3a5
Jan 27 14:09:21 crc kubenswrapper[4914]: I0127 14:09:21.658278 4914 generic.go:334] "Generic (PLEG): container finished" podID="c054fe54-b82e-4f46-9f54-29de25ea1583" containerID="b7f0fe3c2d74b86a87ea978c47267b20120f0cc7eb677c98420692a0ec26df17" exitCode=0
Jan 27 14:09:21 crc kubenswrapper[4914]: I0127 14:09:21.658362 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl" event={"ID":"c054fe54-b82e-4f46-9f54-29de25ea1583","Type":"ContainerDied","Data":"b7f0fe3c2d74b86a87ea978c47267b20120f0cc7eb677c98420692a0ec26df17"}
Jan 27 14:09:21 crc kubenswrapper[4914]: I0127 14:09:21.672169 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk259" event={"ID":"7da5aca4-c382-4626-a94d-932658438b32","Type":"ContainerStarted","Data":"5ae5c2fc1987a21e0b2bac2a6eb7edc70dff6b4348019863d089d016ba68f3a5"}
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.303374 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed8eb59d-eb98-46a0-808b-678b4bd1d5ef" path="/var/lib/kubelet/pods/ed8eb59d-eb98-46a0-808b-678b4bd1d5ef/volumes"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.370381 4914 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pode452ff2e-dbbd-484d-80b0-45883aa5fca3"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pode452ff2e-dbbd-484d-80b0-45883aa5fca3] : Timed out while waiting for systemd to remove kubepods-besteffort-pode452ff2e_dbbd_484d_80b0_45883aa5fca3.slice"
Jan 27 14:09:22 crc kubenswrapper[4914]: E0127 14:09:22.370433 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pode452ff2e-dbbd-484d-80b0-45883aa5fca3] : unable to destroy cgroup paths for cgroup [kubepods besteffort pode452ff2e-dbbd-484d-80b0-45883aa5fca3] : Timed out while waiting for systemd to remove kubepods-besteffort-pode452ff2e_dbbd_484d_80b0_45883aa5fca3.slice" pod="openstack/nova-cell0-conductor-0" podUID="e452ff2e-dbbd-484d-80b0-45883aa5fca3"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.683494 4914 generic.go:334] "Generic (PLEG): container finished" podID="7da5aca4-c382-4626-a94d-932658438b32" containerID="d812598ac84019ec77443ce2213fda62e48ea41ed999ce70d2e6e5fc009c1cc4" exitCode=0
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.683576 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk259" event={"ID":"7da5aca4-c382-4626-a94d-932658438b32","Type":"ContainerDied","Data":"d812598ac84019ec77443ce2213fda62e48ea41ed999ce70d2e6e5fc009c1cc4"}
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.686010 4914 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.687313 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.687333 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl" event={"ID":"c054fe54-b82e-4f46-9f54-29de25ea1583","Type":"ContainerStarted","Data":"606e47fa943c33013c1d1bf6debb2c1ad5eb394e4ab38f0b7706409d30b43611"}
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.737869 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.748202 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.766154 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.767274 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.770132 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.792057 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.816668 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl" podStartSLOduration=3.816644625 podStartE2EDuration="3.816644625s" podCreationTimestamp="2026-01-27 14:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:09:22.810221338 +0000 UTC m=+1521.122571423" watchObservedRunningTime="2026-01-27 14:09:22.816644625 +0000 UTC m=+1521.128994710"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.827291 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a7f2bb6-0714-482b-91e6-50fea1ab85e2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2a7f2bb6-0714-482b-91e6-50fea1ab85e2\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.827362 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a7f2bb6-0714-482b-91e6-50fea1ab85e2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2a7f2bb6-0714-482b-91e6-50fea1ab85e2\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.827461 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg9kr\" (UniqueName: \"kubernetes.io/projected/2a7f2bb6-0714-482b-91e6-50fea1ab85e2-kube-api-access-gg9kr\") pod \"nova-cell0-conductor-0\" (UID: \"2a7f2bb6-0714-482b-91e6-50fea1ab85e2\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.928744 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg9kr\" (UniqueName: \"kubernetes.io/projected/2a7f2bb6-0714-482b-91e6-50fea1ab85e2-kube-api-access-gg9kr\") pod \"nova-cell0-conductor-0\" (UID: \"2a7f2bb6-0714-482b-91e6-50fea1ab85e2\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.929134 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a7f2bb6-0714-482b-91e6-50fea1ab85e2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2a7f2bb6-0714-482b-91e6-50fea1ab85e2\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.929173 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a7f2bb6-0714-482b-91e6-50fea1ab85e2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2a7f2bb6-0714-482b-91e6-50fea1ab85e2\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.936437 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a7f2bb6-0714-482b-91e6-50fea1ab85e2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2a7f2bb6-0714-482b-91e6-50fea1ab85e2\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.942842 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a7f2bb6-0714-482b-91e6-50fea1ab85e2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2a7f2bb6-0714-482b-91e6-50fea1ab85e2\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:22 crc kubenswrapper[4914]: I0127 14:09:22.947743 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg9kr\" (UniqueName: \"kubernetes.io/projected/2a7f2bb6-0714-482b-91e6-50fea1ab85e2-kube-api-access-gg9kr\") pod \"nova-cell0-conductor-0\" (UID: \"2a7f2bb6-0714-482b-91e6-50fea1ab85e2\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:23 crc kubenswrapper[4914]: I0127 14:09:23.086411 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:23 crc kubenswrapper[4914]: I0127 14:09:23.530992 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 14:09:23 crc kubenswrapper[4914]: W0127 14:09:23.533700 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a7f2bb6_0714_482b_91e6_50fea1ab85e2.slice/crio-7e7ff4e16b96f385397a78cf919657dcaa8d7b2e493bd8a8d29c7b86e18c7b34 WatchSource:0}: Error finding container 7e7ff4e16b96f385397a78cf919657dcaa8d7b2e493bd8a8d29c7b86e18c7b34: Status 404 returned error can't find the container with id 7e7ff4e16b96f385397a78cf919657dcaa8d7b2e493bd8a8d29c7b86e18c7b34
Jan 27 14:09:23 crc kubenswrapper[4914]: I0127 14:09:23.698346 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2a7f2bb6-0714-482b-91e6-50fea1ab85e2","Type":"ContainerStarted","Data":"7e7ff4e16b96f385397a78cf919657dcaa8d7b2e493bd8a8d29c7b86e18c7b34"}
Jan 27 14:09:23 crc kubenswrapper[4914]: I0127 14:09:23.699544 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:24 crc kubenswrapper[4914]: I0127 14:09:24.305212 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e452ff2e-dbbd-484d-80b0-45883aa5fca3" path="/var/lib/kubelet/pods/e452ff2e-dbbd-484d-80b0-45883aa5fca3/volumes"
Jan 27 14:09:24 crc kubenswrapper[4914]: I0127 14:09:24.713569 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk259" event={"ID":"7da5aca4-c382-4626-a94d-932658438b32","Type":"ContainerStarted","Data":"3a408b47638a82220c4c062e0580c68d452d6ed2825d817319933562f0367200"}
Jan 27 14:09:24 crc kubenswrapper[4914]: I0127 14:09:24.715952 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2a7f2bb6-0714-482b-91e6-50fea1ab85e2","Type":"ContainerStarted","Data":"813a6663b18b39b8d2f8c025771ef77b973367604d9dc255add2f35ba5a915e6"}
Jan 27 14:09:24 crc kubenswrapper[4914]: I0127 14:09:24.716092 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:24 crc kubenswrapper[4914]: I0127 14:09:24.766374 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.766342407 podStartE2EDuration="2.766342407s" podCreationTimestamp="2026-01-27 14:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:09:24.756099144 +0000 UTC m=+1523.068449229" watchObservedRunningTime="2026-01-27 14:09:24.766342407 +0000 UTC m=+1523.078692502"
Jan 27 14:09:25 crc kubenswrapper[4914]: I0127 14:09:25.726041 4914 generic.go:334] "Generic (PLEG): container finished" podID="7da5aca4-c382-4626-a94d-932658438b32" containerID="3a408b47638a82220c4c062e0580c68d452d6ed2825d817319933562f0367200" exitCode=0
Jan 27 14:09:25 crc kubenswrapper[4914]: I0127 14:09:25.726106 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk259" event={"ID":"7da5aca4-c382-4626-a94d-932658438b32","Type":"ContainerDied","Data":"3a408b47638a82220c4c062e0580c68d452d6ed2825d817319933562f0367200"}
Jan 27 14:09:27 crc kubenswrapper[4914]: I0127 14:09:27.745093 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk259" event={"ID":"7da5aca4-c382-4626-a94d-932658438b32","Type":"ContainerStarted","Data":"371e48382344a148c0b46d0ecaf7b8309a37e9aefd70730ac7b3913a16d3693f"}
Jan 27 14:09:27 crc kubenswrapper[4914]: I0127 14:09:27.767903 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zk259" podStartSLOduration=3.761998254 podStartE2EDuration="7.767879409s" podCreationTimestamp="2026-01-27 14:09:20 +0000 UTC" firstStartedPulling="2026-01-27 14:09:22.685693289 +0000 UTC m=+1520.998043374" lastFinishedPulling="2026-01-27 14:09:26.691574444 +0000 UTC m=+1525.003924529" observedRunningTime="2026-01-27 14:09:27.763000554 +0000 UTC m=+1526.075350639" watchObservedRunningTime="2026-01-27 14:09:27.767879409 +0000 UTC m=+1526.080229494"
Jan 27 14:09:28 crc kubenswrapper[4914]: I0127 14:09:28.112275 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 27 14:09:29 crc kubenswrapper[4914]: I0127 14:09:29.976105 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66fc59ccbf-9wmtl"
Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.056782 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-wtzqw"]
Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.057442 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" podUID="12a4cf14-22d7-4495-9a6d-98138807d10e" containerName="dnsmasq-dns" containerID="cri-o://5ec52d085c5740cb38d7873df1d836dee25db06826a98b85618034fb5c975fa3" gracePeriod=10
Jan 27 14:09:30
crc kubenswrapper[4914]: I0127 14:09:30.550061 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.698865 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-openstack-edpm-ipam\") pod \"12a4cf14-22d7-4495-9a6d-98138807d10e\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.698918 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-dns-swift-storage-0\") pod \"12a4cf14-22d7-4495-9a6d-98138807d10e\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.698965 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-ovsdbserver-sb\") pod \"12a4cf14-22d7-4495-9a6d-98138807d10e\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.699104 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5smkr\" (UniqueName: \"kubernetes.io/projected/12a4cf14-22d7-4495-9a6d-98138807d10e-kube-api-access-5smkr\") pod \"12a4cf14-22d7-4495-9a6d-98138807d10e\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.699170 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-dns-svc\") pod \"12a4cf14-22d7-4495-9a6d-98138807d10e\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " Jan 27 14:09:30 crc 
kubenswrapper[4914]: I0127 14:09:30.699205 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-ovsdbserver-nb\") pod \"12a4cf14-22d7-4495-9a6d-98138807d10e\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.699231 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-config\") pod \"12a4cf14-22d7-4495-9a6d-98138807d10e\" (UID: \"12a4cf14-22d7-4495-9a6d-98138807d10e\") " Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.705138 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a4cf14-22d7-4495-9a6d-98138807d10e-kube-api-access-5smkr" (OuterVolumeSpecName: "kube-api-access-5smkr") pod "12a4cf14-22d7-4495-9a6d-98138807d10e" (UID: "12a4cf14-22d7-4495-9a6d-98138807d10e"). InnerVolumeSpecName "kube-api-access-5smkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.751619 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "12a4cf14-22d7-4495-9a6d-98138807d10e" (UID: "12a4cf14-22d7-4495-9a6d-98138807d10e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.755655 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-config" (OuterVolumeSpecName: "config") pod "12a4cf14-22d7-4495-9a6d-98138807d10e" (UID: "12a4cf14-22d7-4495-9a6d-98138807d10e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.756675 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "12a4cf14-22d7-4495-9a6d-98138807d10e" (UID: "12a4cf14-22d7-4495-9a6d-98138807d10e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.756753 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "12a4cf14-22d7-4495-9a6d-98138807d10e" (UID: "12a4cf14-22d7-4495-9a6d-98138807d10e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.762736 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "12a4cf14-22d7-4495-9a6d-98138807d10e" (UID: "12a4cf14-22d7-4495-9a6d-98138807d10e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.781776 4914 generic.go:334] "Generic (PLEG): container finished" podID="12a4cf14-22d7-4495-9a6d-98138807d10e" containerID="5ec52d085c5740cb38d7873df1d836dee25db06826a98b85618034fb5c975fa3" exitCode=0 Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.781844 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" event={"ID":"12a4cf14-22d7-4495-9a6d-98138807d10e","Type":"ContainerDied","Data":"5ec52d085c5740cb38d7873df1d836dee25db06826a98b85618034fb5c975fa3"} Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.781874 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" event={"ID":"12a4cf14-22d7-4495-9a6d-98138807d10e","Type":"ContainerDied","Data":"6235b8a8a46b6f2297255378d4fa80b628bbc1277e875f524fe02f4e319b2ae0"} Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.781893 4914 scope.go:117] "RemoveContainer" containerID="5ec52d085c5740cb38d7873df1d836dee25db06826a98b85618034fb5c975fa3" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.781845 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-668b55cdd7-wtzqw" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.790431 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "12a4cf14-22d7-4495-9a6d-98138807d10e" (UID: "12a4cf14-22d7-4495-9a6d-98138807d10e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.801203 4914 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.801236 4914 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.801249 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.801259 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5smkr\" (UniqueName: \"kubernetes.io/projected/12a4cf14-22d7-4495-9a6d-98138807d10e-kube-api-access-5smkr\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.801271 4914 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.801279 4914 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.801288 4914 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a4cf14-22d7-4495-9a6d-98138807d10e-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.830025 
4914 scope.go:117] "RemoveContainer" containerID="aff2b6eccb54222113aba572495f1de3864101a975836e5f6d7aa49cc33f5340" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.853875 4914 scope.go:117] "RemoveContainer" containerID="5ec52d085c5740cb38d7873df1d836dee25db06826a98b85618034fb5c975fa3" Jan 27 14:09:30 crc kubenswrapper[4914]: E0127 14:09:30.854409 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ec52d085c5740cb38d7873df1d836dee25db06826a98b85618034fb5c975fa3\": container with ID starting with 5ec52d085c5740cb38d7873df1d836dee25db06826a98b85618034fb5c975fa3 not found: ID does not exist" containerID="5ec52d085c5740cb38d7873df1d836dee25db06826a98b85618034fb5c975fa3" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.854463 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ec52d085c5740cb38d7873df1d836dee25db06826a98b85618034fb5c975fa3"} err="failed to get container status \"5ec52d085c5740cb38d7873df1d836dee25db06826a98b85618034fb5c975fa3\": rpc error: code = NotFound desc = could not find container \"5ec52d085c5740cb38d7873df1d836dee25db06826a98b85618034fb5c975fa3\": container with ID starting with 5ec52d085c5740cb38d7873df1d836dee25db06826a98b85618034fb5c975fa3 not found: ID does not exist" Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.854497 4914 scope.go:117] "RemoveContainer" containerID="aff2b6eccb54222113aba572495f1de3864101a975836e5f6d7aa49cc33f5340" Jan 27 14:09:30 crc kubenswrapper[4914]: E0127 14:09:30.854914 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aff2b6eccb54222113aba572495f1de3864101a975836e5f6d7aa49cc33f5340\": container with ID starting with aff2b6eccb54222113aba572495f1de3864101a975836e5f6d7aa49cc33f5340 not found: ID does not exist" containerID="aff2b6eccb54222113aba572495f1de3864101a975836e5f6d7aa49cc33f5340" 
Jan 27 14:09:30 crc kubenswrapper[4914]: I0127 14:09:30.854949 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aff2b6eccb54222113aba572495f1de3864101a975836e5f6d7aa49cc33f5340"} err="failed to get container status \"aff2b6eccb54222113aba572495f1de3864101a975836e5f6d7aa49cc33f5340\": rpc error: code = NotFound desc = could not find container \"aff2b6eccb54222113aba572495f1de3864101a975836e5f6d7aa49cc33f5340\": container with ID starting with aff2b6eccb54222113aba572495f1de3864101a975836e5f6d7aa49cc33f5340 not found: ID does not exist" Jan 27 14:09:31 crc kubenswrapper[4914]: I0127 14:09:31.066138 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zk259" Jan 27 14:09:31 crc kubenswrapper[4914]: I0127 14:09:31.066193 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zk259" Jan 27 14:09:31 crc kubenswrapper[4914]: I0127 14:09:31.114802 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-wtzqw"] Jan 27 14:09:31 crc kubenswrapper[4914]: I0127 14:09:31.122708 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-wtzqw"] Jan 27 14:09:32 crc kubenswrapper[4914]: I0127 14:09:32.119272 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zk259" podUID="7da5aca4-c382-4626-a94d-932658438b32" containerName="registry-server" probeResult="failure" output=< Jan 27 14:09:32 crc kubenswrapper[4914]: timeout: failed to connect service ":50051" within 1s Jan 27 14:09:32 crc kubenswrapper[4914]: > Jan 27 14:09:32 crc kubenswrapper[4914]: I0127 14:09:32.309230 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12a4cf14-22d7-4495-9a6d-98138807d10e" path="/var/lib/kubelet/pods/12a4cf14-22d7-4495-9a6d-98138807d10e/volumes" Jan 27 14:09:37 crc 
kubenswrapper[4914]: I0127 14:09:37.690517 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:09:37 crc kubenswrapper[4914]: I0127 14:09:37.691127 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:09:39 crc kubenswrapper[4914]: I0127 14:09:39.872442 4914 generic.go:334] "Generic (PLEG): container finished" podID="fa420d84-09ad-44c4-9af0-fddfcab7501c" containerID="ae6f4932ee1a553cf9b79083e449649b01c1267647a8cf242d75d43e438917ed" exitCode=0 Jan 27 14:09:39 crc kubenswrapper[4914]: I0127 14:09:39.872531 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fa420d84-09ad-44c4-9af0-fddfcab7501c","Type":"ContainerDied","Data":"ae6f4932ee1a553cf9b79083e449649b01c1267647a8cf242d75d43e438917ed"} Jan 27 14:09:39 crc kubenswrapper[4914]: I0127 14:09:39.878634 4914 generic.go:334] "Generic (PLEG): container finished" podID="c3655c22-46a7-4ed5-bba1-4a294940777d" containerID="ca80b4dcadea311d8cfe5c094675b5316d43b41dfaa72e99ca4fd7bd79224c98" exitCode=0 Jan 27 14:09:39 crc kubenswrapper[4914]: I0127 14:09:39.878672 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3655c22-46a7-4ed5-bba1-4a294940777d","Type":"ContainerDied","Data":"ca80b4dcadea311d8cfe5c094675b5316d43b41dfaa72e99ca4fd7bd79224c98"} Jan 27 14:09:40 crc kubenswrapper[4914]: I0127 14:09:40.888285 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"fa420d84-09ad-44c4-9af0-fddfcab7501c","Type":"ContainerStarted","Data":"d81128fce7a7e369c2ee5078e8f9403cc5e7922c368e7e444c8d1853d6b750ec"} Jan 27 14:09:40 crc kubenswrapper[4914]: I0127 14:09:40.889017 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 14:09:40 crc kubenswrapper[4914]: I0127 14:09:40.890215 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3655c22-46a7-4ed5-bba1-4a294940777d","Type":"ContainerStarted","Data":"3a54a2dce24d0ef439aca3888abcb9368fd200c444a4811e33ff83a89e781736"} Jan 27 14:09:40 crc kubenswrapper[4914]: I0127 14:09:40.890381 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:09:40 crc kubenswrapper[4914]: I0127 14:09:40.923777 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.923753594 podStartE2EDuration="36.923753594s" podCreationTimestamp="2026-01-27 14:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:09:40.911334182 +0000 UTC m=+1539.223684267" watchObservedRunningTime="2026-01-27 14:09:40.923753594 +0000 UTC m=+1539.236103679" Jan 27 14:09:40 crc kubenswrapper[4914]: I0127 14:09:40.942602 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.942580672 podStartE2EDuration="36.942580672s" podCreationTimestamp="2026-01-27 14:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:09:40.940923537 +0000 UTC m=+1539.253273622" watchObservedRunningTime="2026-01-27 14:09:40.942580672 +0000 UTC m=+1539.254930757" Jan 27 14:09:42 crc kubenswrapper[4914]: I0127 14:09:42.112778 4914 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zk259" podUID="7da5aca4-c382-4626-a94d-932658438b32" containerName="registry-server" probeResult="failure" output=< Jan 27 14:09:42 crc kubenswrapper[4914]: timeout: failed to connect service ":50051" within 1s Jan 27 14:09:42 crc kubenswrapper[4914]: > Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.219250 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv"] Jan 27 14:09:43 crc kubenswrapper[4914]: E0127 14:09:43.220056 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a4cf14-22d7-4495-9a6d-98138807d10e" containerName="init" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.220073 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a4cf14-22d7-4495-9a6d-98138807d10e" containerName="init" Jan 27 14:09:43 crc kubenswrapper[4914]: E0127 14:09:43.220095 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a4cf14-22d7-4495-9a6d-98138807d10e" containerName="dnsmasq-dns" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.220106 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a4cf14-22d7-4495-9a6d-98138807d10e" containerName="dnsmasq-dns" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.220356 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a4cf14-22d7-4495-9a6d-98138807d10e" containerName="dnsmasq-dns" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.221110 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.228372 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.228575 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.228714 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.228960 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.231638 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmzc9\" (UniqueName: \"kubernetes.io/projected/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-kube-api-access-tmzc9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.231709 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.231734 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.231791 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.235399 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv"] Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.616050 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.616240 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmzc9\" (UniqueName: \"kubernetes.io/projected/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-kube-api-access-tmzc9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.616310 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.616351 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.623472 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.623551 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.638892 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv"
Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.655703 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmzc9\" (UniqueName: \"kubernetes.io/projected/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-kube-api-access-tmzc9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv"
Jan 27 14:09:43 crc kubenswrapper[4914]: I0127 14:09:43.849762 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv"
Jan 27 14:09:44 crc kubenswrapper[4914]: I0127 14:09:44.396212 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv"]
Jan 27 14:09:44 crc kubenswrapper[4914]: I0127 14:09:44.926596 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" event={"ID":"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d","Type":"ContainerStarted","Data":"6a30ec035c0ec0cd7fe9b2e3786ae44cff84c50a0d4b01522a05d4c31b8cd2f7"}
Jan 27 14:09:52 crc kubenswrapper[4914]: I0127 14:09:52.123175 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zk259" podUID="7da5aca4-c382-4626-a94d-932658438b32" containerName="registry-server" probeResult="failure" output=<
Jan 27 14:09:52 crc kubenswrapper[4914]: timeout: failed to connect service ":50051" within 1s
Jan 27 14:09:52 crc kubenswrapper[4914]: >
Jan 27 14:09:54 crc kubenswrapper[4914]: I0127 14:09:54.863019 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:09:54 crc kubenswrapper[4914]: I0127 14:09:54.871037 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 27 14:09:59 crc kubenswrapper[4914]: I0127 14:09:59.062888 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" event={"ID":"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d","Type":"ContainerStarted","Data":"5c7e43c9d5a45a7a2999e7b43a18ff3c3e50bad204711ada241f19c3259d3ca3"}
Jan 27 14:09:59 crc kubenswrapper[4914]: I0127 14:09:59.089947 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" podStartSLOduration=1.6984637839999999 podStartE2EDuration="16.08992974s" podCreationTimestamp="2026-01-27 14:09:43 +0000 UTC" firstStartedPulling="2026-01-27 14:09:44.403153223 +0000 UTC m=+1542.715503308" lastFinishedPulling="2026-01-27 14:09:58.794619179 +0000 UTC m=+1557.106969264" observedRunningTime="2026-01-27 14:09:59.076963433 +0000 UTC m=+1557.389313528" watchObservedRunningTime="2026-01-27 14:09:59.08992974 +0000 UTC m=+1557.402279825"
Jan 27 14:10:01 crc kubenswrapper[4914]: I0127 14:10:01.114989 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:10:01 crc kubenswrapper[4914]: I0127 14:10:01.161013 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:10:01 crc kubenswrapper[4914]: I0127 14:10:01.352048 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zk259"]
Jan 27 14:10:03 crc kubenswrapper[4914]: I0127 14:10:03.096353 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zk259" podUID="7da5aca4-c382-4626-a94d-932658438b32" containerName="registry-server" containerID="cri-o://371e48382344a148c0b46d0ecaf7b8309a37e9aefd70730ac7b3913a16d3693f" gracePeriod=2
Jan 27 14:10:03 crc kubenswrapper[4914]: I0127 14:10:03.618634 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:10:03 crc kubenswrapper[4914]: I0127 14:10:03.668418 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7da5aca4-c382-4626-a94d-932658438b32-catalog-content\") pod \"7da5aca4-c382-4626-a94d-932658438b32\" (UID: \"7da5aca4-c382-4626-a94d-932658438b32\") "
Jan 27 14:10:03 crc kubenswrapper[4914]: I0127 14:10:03.668707 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xls65\" (UniqueName: \"kubernetes.io/projected/7da5aca4-c382-4626-a94d-932658438b32-kube-api-access-xls65\") pod \"7da5aca4-c382-4626-a94d-932658438b32\" (UID: \"7da5aca4-c382-4626-a94d-932658438b32\") "
Jan 27 14:10:03 crc kubenswrapper[4914]: I0127 14:10:03.668745 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7da5aca4-c382-4626-a94d-932658438b32-utilities\") pod \"7da5aca4-c382-4626-a94d-932658438b32\" (UID: \"7da5aca4-c382-4626-a94d-932658438b32\") "
Jan 27 14:10:03 crc kubenswrapper[4914]: I0127 14:10:03.669374 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7da5aca4-c382-4626-a94d-932658438b32-utilities" (OuterVolumeSpecName: "utilities") pod "7da5aca4-c382-4626-a94d-932658438b32" (UID: "7da5aca4-c382-4626-a94d-932658438b32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:10:03 crc kubenswrapper[4914]: I0127 14:10:03.674454 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da5aca4-c382-4626-a94d-932658438b32-kube-api-access-xls65" (OuterVolumeSpecName: "kube-api-access-xls65") pod "7da5aca4-c382-4626-a94d-932658438b32" (UID: "7da5aca4-c382-4626-a94d-932658438b32"). InnerVolumeSpecName "kube-api-access-xls65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:10:03 crc kubenswrapper[4914]: I0127 14:10:03.770510 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xls65\" (UniqueName: \"kubernetes.io/projected/7da5aca4-c382-4626-a94d-932658438b32-kube-api-access-xls65\") on node \"crc\" DevicePath \"\""
Jan 27 14:10:03 crc kubenswrapper[4914]: I0127 14:10:03.770544 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7da5aca4-c382-4626-a94d-932658438b32-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 14:10:03 crc kubenswrapper[4914]: I0127 14:10:03.772187 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7da5aca4-c382-4626-a94d-932658438b32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7da5aca4-c382-4626-a94d-932658438b32" (UID: "7da5aca4-c382-4626-a94d-932658438b32"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:10:03 crc kubenswrapper[4914]: I0127 14:10:03.872800 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7da5aca4-c382-4626-a94d-932658438b32-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.108252 4914 generic.go:334] "Generic (PLEG): container finished" podID="7da5aca4-c382-4626-a94d-932658438b32" containerID="371e48382344a148c0b46d0ecaf7b8309a37e9aefd70730ac7b3913a16d3693f" exitCode=0
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.108300 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk259" event={"ID":"7da5aca4-c382-4626-a94d-932658438b32","Type":"ContainerDied","Data":"371e48382344a148c0b46d0ecaf7b8309a37e9aefd70730ac7b3913a16d3693f"}
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.108310 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zk259"
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.108370 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk259" event={"ID":"7da5aca4-c382-4626-a94d-932658438b32","Type":"ContainerDied","Data":"5ae5c2fc1987a21e0b2bac2a6eb7edc70dff6b4348019863d089d016ba68f3a5"}
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.108393 4914 scope.go:117] "RemoveContainer" containerID="371e48382344a148c0b46d0ecaf7b8309a37e9aefd70730ac7b3913a16d3693f"
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.128750 4914 scope.go:117] "RemoveContainer" containerID="3a408b47638a82220c4c062e0580c68d452d6ed2825d817319933562f0367200"
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.147148 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zk259"]
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.167535 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zk259"]
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.168880 4914 scope.go:117] "RemoveContainer" containerID="d812598ac84019ec77443ce2213fda62e48ea41ed999ce70d2e6e5fc009c1cc4"
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.204946 4914 scope.go:117] "RemoveContainer" containerID="371e48382344a148c0b46d0ecaf7b8309a37e9aefd70730ac7b3913a16d3693f"
Jan 27 14:10:04 crc kubenswrapper[4914]: E0127 14:10:04.205548 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"371e48382344a148c0b46d0ecaf7b8309a37e9aefd70730ac7b3913a16d3693f\": container with ID starting with 371e48382344a148c0b46d0ecaf7b8309a37e9aefd70730ac7b3913a16d3693f not found: ID does not exist" containerID="371e48382344a148c0b46d0ecaf7b8309a37e9aefd70730ac7b3913a16d3693f"
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.205623 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371e48382344a148c0b46d0ecaf7b8309a37e9aefd70730ac7b3913a16d3693f"} err="failed to get container status \"371e48382344a148c0b46d0ecaf7b8309a37e9aefd70730ac7b3913a16d3693f\": rpc error: code = NotFound desc = could not find container \"371e48382344a148c0b46d0ecaf7b8309a37e9aefd70730ac7b3913a16d3693f\": container with ID starting with 371e48382344a148c0b46d0ecaf7b8309a37e9aefd70730ac7b3913a16d3693f not found: ID does not exist"
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.205677 4914 scope.go:117] "RemoveContainer" containerID="3a408b47638a82220c4c062e0580c68d452d6ed2825d817319933562f0367200"
Jan 27 14:10:04 crc kubenswrapper[4914]: E0127 14:10:04.205998 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a408b47638a82220c4c062e0580c68d452d6ed2825d817319933562f0367200\": container with ID starting with 3a408b47638a82220c4c062e0580c68d452d6ed2825d817319933562f0367200 not found: ID does not exist" containerID="3a408b47638a82220c4c062e0580c68d452d6ed2825d817319933562f0367200"
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.206030 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a408b47638a82220c4c062e0580c68d452d6ed2825d817319933562f0367200"} err="failed to get container status \"3a408b47638a82220c4c062e0580c68d452d6ed2825d817319933562f0367200\": rpc error: code = NotFound desc = could not find container \"3a408b47638a82220c4c062e0580c68d452d6ed2825d817319933562f0367200\": container with ID starting with 3a408b47638a82220c4c062e0580c68d452d6ed2825d817319933562f0367200 not found: ID does not exist"
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.206049 4914 scope.go:117] "RemoveContainer" containerID="d812598ac84019ec77443ce2213fda62e48ea41ed999ce70d2e6e5fc009c1cc4"
Jan 27 14:10:04 crc kubenswrapper[4914]: E0127 14:10:04.206525 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d812598ac84019ec77443ce2213fda62e48ea41ed999ce70d2e6e5fc009c1cc4\": container with ID starting with d812598ac84019ec77443ce2213fda62e48ea41ed999ce70d2e6e5fc009c1cc4 not found: ID does not exist" containerID="d812598ac84019ec77443ce2213fda62e48ea41ed999ce70d2e6e5fc009c1cc4"
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.206562 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d812598ac84019ec77443ce2213fda62e48ea41ed999ce70d2e6e5fc009c1cc4"} err="failed to get container status \"d812598ac84019ec77443ce2213fda62e48ea41ed999ce70d2e6e5fc009c1cc4\": rpc error: code = NotFound desc = could not find container \"d812598ac84019ec77443ce2213fda62e48ea41ed999ce70d2e6e5fc009c1cc4\": container with ID starting with d812598ac84019ec77443ce2213fda62e48ea41ed999ce70d2e6e5fc009c1cc4 not found: ID does not exist"
Jan 27 14:10:04 crc kubenswrapper[4914]: I0127 14:10:04.304861 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7da5aca4-c382-4626-a94d-932658438b32" path="/var/lib/kubelet/pods/7da5aca4-c382-4626-a94d-932658438b32/volumes"
Jan 27 14:10:07 crc kubenswrapper[4914]: I0127 14:10:07.691187 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 14:10:07 crc kubenswrapper[4914]: I0127 14:10:07.692260 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 14:10:07 crc kubenswrapper[4914]: I0127 14:10:07.692346 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz"
Jan 27 14:10:07 crc kubenswrapper[4914]: I0127 14:10:07.693686 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771"} pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 14:10:07 crc kubenswrapper[4914]: I0127 14:10:07.693760 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" containerID="cri-o://a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" gracePeriod=600
Jan 27 14:10:07 crc kubenswrapper[4914]: E0127 14:10:07.817661 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a"
Jan 27 14:10:08 crc kubenswrapper[4914]: I0127 14:10:08.149508 4914 generic.go:334] "Generic (PLEG): container finished" podID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" exitCode=0
Jan 27 14:10:08 crc kubenswrapper[4914]: I0127 14:10:08.149566 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerDied","Data":"a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771"}
Jan 27 14:10:08 crc kubenswrapper[4914]: I0127 14:10:08.149655 4914 scope.go:117] "RemoveContainer" containerID="d7c21b1cd9cda80b642f46a096fe84b98a11cc182c636f7e3bfaf4ae3f160417"
Jan 27 14:10:08 crc kubenswrapper[4914]: I0127 14:10:08.150450 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771"
Jan 27 14:10:08 crc kubenswrapper[4914]: E0127 14:10:08.151022 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a"
Jan 27 14:10:11 crc kubenswrapper[4914]: I0127 14:10:11.184442 4914 generic.go:334] "Generic (PLEG): container finished" podID="c2ce37bb-8f3e-42cf-8c80-fa1c3496354d" containerID="5c7e43c9d5a45a7a2999e7b43a18ff3c3e50bad204711ada241f19c3259d3ca3" exitCode=0
Jan 27 14:10:11 crc kubenswrapper[4914]: I0127 14:10:11.184526 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" event={"ID":"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d","Type":"ContainerDied","Data":"5c7e43c9d5a45a7a2999e7b43a18ff3c3e50bad204711ada241f19c3259d3ca3"}
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.721290 4914 scope.go:117] "RemoveContainer" containerID="288e00b9021032445478efb7c4c5fdf89de3954d4aa42b5702d76ba9c7b60873"
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.727653 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv"
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.767559 4914 scope.go:117] "RemoveContainer" containerID="cff9f1459685a45bc8c8fffc867e42a6e36b8d78a0fb5b986b715debedd9422f"
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.839381 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-ssh-key-openstack-edpm-ipam\") pod \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") "
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.839452 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-inventory\") pod \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") "
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.839485 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmzc9\" (UniqueName: \"kubernetes.io/projected/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-kube-api-access-tmzc9\") pod \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") "
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.839527 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-repo-setup-combined-ca-bundle\") pod \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\" (UID: \"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d\") "
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.845599 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c2ce37bb-8f3e-42cf-8c80-fa1c3496354d" (UID: "c2ce37bb-8f3e-42cf-8c80-fa1c3496354d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.845801 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-kube-api-access-tmzc9" (OuterVolumeSpecName: "kube-api-access-tmzc9") pod "c2ce37bb-8f3e-42cf-8c80-fa1c3496354d" (UID: "c2ce37bb-8f3e-42cf-8c80-fa1c3496354d"). InnerVolumeSpecName "kube-api-access-tmzc9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.872866 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-inventory" (OuterVolumeSpecName: "inventory") pod "c2ce37bb-8f3e-42cf-8c80-fa1c3496354d" (UID: "c2ce37bb-8f3e-42cf-8c80-fa1c3496354d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.882093 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c2ce37bb-8f3e-42cf-8c80-fa1c3496354d" (UID: "c2ce37bb-8f3e-42cf-8c80-fa1c3496354d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.941287 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.941323 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.941336 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmzc9\" (UniqueName: \"kubernetes.io/projected/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-kube-api-access-tmzc9\") on node \"crc\" DevicePath \"\""
Jan 27 14:10:12 crc kubenswrapper[4914]: I0127 14:10:12.941347 4914 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ce37bb-8f3e-42cf-8c80-fa1c3496354d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.206422 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv" event={"ID":"c2ce37bb-8f3e-42cf-8c80-fa1c3496354d","Type":"ContainerDied","Data":"6a30ec035c0ec0cd7fe9b2e3786ae44cff84c50a0d4b01522a05d4c31b8cd2f7"}
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.206783 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a30ec035c0ec0cd7fe9b2e3786ae44cff84c50a0d4b01522a05d4c31b8cd2f7"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.206820 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.305882 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"]
Jan 27 14:10:13 crc kubenswrapper[4914]: E0127 14:10:13.306365 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da5aca4-c382-4626-a94d-932658438b32" containerName="extract-content"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.306380 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da5aca4-c382-4626-a94d-932658438b32" containerName="extract-content"
Jan 27 14:10:13 crc kubenswrapper[4914]: E0127 14:10:13.306407 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da5aca4-c382-4626-a94d-932658438b32" containerName="registry-server"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.306415 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da5aca4-c382-4626-a94d-932658438b32" containerName="registry-server"
Jan 27 14:10:13 crc kubenswrapper[4914]: E0127 14:10:13.306427 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da5aca4-c382-4626-a94d-932658438b32" containerName="extract-utilities"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.306436 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da5aca4-c382-4626-a94d-932658438b32" containerName="extract-utilities"
Jan 27 14:10:13 crc kubenswrapper[4914]: E0127 14:10:13.306462 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ce37bb-8f3e-42cf-8c80-fa1c3496354d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.306471 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ce37bb-8f3e-42cf-8c80-fa1c3496354d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.306690 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="7da5aca4-c382-4626-a94d-932658438b32" containerName="registry-server"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.306723 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2ce37bb-8f3e-42cf-8c80-fa1c3496354d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.307594 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.313826 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.313860 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.313893 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.314195 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.319742 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"]
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.353163 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rtxs8\" (UID: \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.353221 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92czv\" (UniqueName: \"kubernetes.io/projected/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-kube-api-access-92czv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rtxs8\" (UID: \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.353754 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rtxs8\" (UID: \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.455014 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rtxs8\" (UID: \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.455068 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92czv\" (UniqueName: \"kubernetes.io/projected/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-kube-api-access-92czv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rtxs8\" (UID: \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.455173 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rtxs8\" (UID: \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.459651 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rtxs8\" (UID: \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.459854 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rtxs8\" (UID: \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.472373 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92czv\" (UniqueName: \"kubernetes.io/projected/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-kube-api-access-92czv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rtxs8\" (UID: \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"
Jan 27 14:10:13 crc kubenswrapper[4914]: I0127 14:10:13.634439 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"
Jan 27 14:10:14 crc kubenswrapper[4914]: I0127 14:10:14.164448 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"]
Jan 27 14:10:14 crc kubenswrapper[4914]: W0127 14:10:14.165267 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4498dcaf_b92d_48e5_9c54_8678b3d36f1b.slice/crio-481ecccdc53baece620280236b03d8c697acaab638e67dd8037b8df9a62d1c3f WatchSource:0}: Error finding container 481ecccdc53baece620280236b03d8c697acaab638e67dd8037b8df9a62d1c3f: Status 404 returned error can't find the container with id 481ecccdc53baece620280236b03d8c697acaab638e67dd8037b8df9a62d1c3f
Jan 27 14:10:14 crc kubenswrapper[4914]: I0127 14:10:14.220996 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8" event={"ID":"4498dcaf-b92d-48e5-9c54-8678b3d36f1b","Type":"ContainerStarted","Data":"481ecccdc53baece620280236b03d8c697acaab638e67dd8037b8df9a62d1c3f"}
Jan 27 14:10:16 crc kubenswrapper[4914]: I0127 14:10:16.238749 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8" event={"ID":"4498dcaf-b92d-48e5-9c54-8678b3d36f1b","Type":"ContainerStarted","Data":"1b81fc46d489144b754a2349c6d3f14fb41ea4109e9f076735b1b10d7254dca0"}
Jan 27 14:10:16 crc kubenswrapper[4914]: I0127 14:10:16.262643 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8" podStartSLOduration=1.5541874930000001 podStartE2EDuration="3.262625132s" podCreationTimestamp="2026-01-27 14:10:13 +0000 UTC" firstStartedPulling="2026-01-27 14:10:14.168931406 +0000 UTC m=+1572.481281491" lastFinishedPulling="2026-01-27 14:10:15.877369045 +0000 UTC m=+1574.189719130" observedRunningTime="2026-01-27 14:10:16.254910139 +0000 UTC m=+1574.567260224" watchObservedRunningTime="2026-01-27 14:10:16.262625132 +0000 UTC m=+1574.574975217"
Jan 27 14:10:19 crc kubenswrapper[4914]: I0127 14:10:19.266482 4914 generic.go:334] "Generic (PLEG): container finished" podID="4498dcaf-b92d-48e5-9c54-8678b3d36f1b" containerID="1b81fc46d489144b754a2349c6d3f14fb41ea4109e9f076735b1b10d7254dca0" exitCode=0
Jan 27 14:10:19 crc kubenswrapper[4914]: I0127 14:10:19.266524 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8" event={"ID":"4498dcaf-b92d-48e5-9c54-8678b3d36f1b","Type":"ContainerDied","Data":"1b81fc46d489144b754a2349c6d3f14fb41ea4109e9f076735b1b10d7254dca0"}
Jan 27 14:10:20 crc kubenswrapper[4914]: I0127 14:10:20.691853 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"
Jan 27 14:10:20 crc kubenswrapper[4914]: I0127 14:10:20.834065 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-inventory\") pod \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\" (UID: \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\") "
Jan 27 14:10:20 crc kubenswrapper[4914]: I0127 14:10:20.834461 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92czv\" (UniqueName: \"kubernetes.io/projected/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-kube-api-access-92czv\") pod \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\" (UID: \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\") "
Jan 27 14:10:20 crc kubenswrapper[4914]: I0127 14:10:20.834522 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-ssh-key-openstack-edpm-ipam\") pod \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\" (UID: \"4498dcaf-b92d-48e5-9c54-8678b3d36f1b\") "
Jan 27 14:10:20 crc kubenswrapper[4914]: I0127 14:10:20.842107 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-kube-api-access-92czv" (OuterVolumeSpecName: "kube-api-access-92czv") pod "4498dcaf-b92d-48e5-9c54-8678b3d36f1b" (UID: "4498dcaf-b92d-48e5-9c54-8678b3d36f1b"). InnerVolumeSpecName "kube-api-access-92czv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:10:20 crc kubenswrapper[4914]: I0127 14:10:20.867903 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-inventory" (OuterVolumeSpecName: "inventory") pod "4498dcaf-b92d-48e5-9c54-8678b3d36f1b" (UID: "4498dcaf-b92d-48e5-9c54-8678b3d36f1b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:10:20 crc kubenswrapper[4914]: I0127 14:10:20.869734 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4498dcaf-b92d-48e5-9c54-8678b3d36f1b" (UID: "4498dcaf-b92d-48e5-9c54-8678b3d36f1b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:10:20 crc kubenswrapper[4914]: I0127 14:10:20.938166 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92czv\" (UniqueName: \"kubernetes.io/projected/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-kube-api-access-92czv\") on node \"crc\" DevicePath \"\""
Jan 27 14:10:20 crc kubenswrapper[4914]: I0127 14:10:20.938204 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 14:10:20 crc kubenswrapper[4914]: I0127 14:10:20.938222 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4498dcaf-b92d-48e5-9c54-8678b3d36f1b-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.288992 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8" event={"ID":"4498dcaf-b92d-48e5-9c54-8678b3d36f1b","Type":"ContainerDied","Data":"481ecccdc53baece620280236b03d8c697acaab638e67dd8037b8df9a62d1c3f"}
Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.289028 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="481ecccdc53baece620280236b03d8c697acaab638e67dd8037b8df9a62d1c3f"
Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.289042 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rtxs8"
Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.358743 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc"]
Jan 27 14:10:21 crc kubenswrapper[4914]: E0127 14:10:21.359177 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4498dcaf-b92d-48e5-9c54-8678b3d36f1b" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.359191 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="4498dcaf-b92d-48e5-9c54-8678b3d36f1b" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.359379 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="4498dcaf-b92d-48e5-9c54-8678b3d36f1b" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.359967 4914 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.362571 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.362743 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.363093 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.363211 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.382118 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc"] Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.446779 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgqr9\" (UniqueName: \"kubernetes.io/projected/77a76dd2-a27e-4755-881f-3472edf77cd6-kube-api-access-tgqr9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.447027 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 
14:10:21.447213 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.447672 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.549888 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.549957 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.550102 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.550154 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgqr9\" (UniqueName: \"kubernetes.io/projected/77a76dd2-a27e-4755-881f-3472edf77cd6-kube-api-access-tgqr9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.553764 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.553941 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.554532 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.566754 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgqr9\" (UniqueName: \"kubernetes.io/projected/77a76dd2-a27e-4755-881f-3472edf77cd6-kube-api-access-tgqr9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:21 crc kubenswrapper[4914]: I0127 14:10:21.681022 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:10:22 crc kubenswrapper[4914]: W0127 14:10:22.224896 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77a76dd2_a27e_4755_881f_3472edf77cd6.slice/crio-030da0036aa753d6bab4cac55a9f4e35b09a8474271b152820593cb816f990cf WatchSource:0}: Error finding container 030da0036aa753d6bab4cac55a9f4e35b09a8474271b152820593cb816f990cf: Status 404 returned error can't find the container with id 030da0036aa753d6bab4cac55a9f4e35b09a8474271b152820593cb816f990cf Jan 27 14:10:22 crc kubenswrapper[4914]: I0127 14:10:22.230219 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc"] Jan 27 14:10:22 crc kubenswrapper[4914]: I0127 14:10:22.309521 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" event={"ID":"77a76dd2-a27e-4755-881f-3472edf77cd6","Type":"ContainerStarted","Data":"030da0036aa753d6bab4cac55a9f4e35b09a8474271b152820593cb816f990cf"} Jan 27 14:10:23 crc kubenswrapper[4914]: I0127 14:10:23.296723 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:10:23 crc 
kubenswrapper[4914]: E0127 14:10:23.297409 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:10:23 crc kubenswrapper[4914]: I0127 14:10:23.312881 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" event={"ID":"77a76dd2-a27e-4755-881f-3472edf77cd6","Type":"ContainerStarted","Data":"4e023a5d32549e912230382fb121126b075abea26901bf9595192ce3cd5b4928"} Jan 27 14:10:23 crc kubenswrapper[4914]: I0127 14:10:23.340321 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" podStartSLOduration=1.826065345 podStartE2EDuration="2.340303515s" podCreationTimestamp="2026-01-27 14:10:21 +0000 UTC" firstStartedPulling="2026-01-27 14:10:22.229327385 +0000 UTC m=+1580.541677480" lastFinishedPulling="2026-01-27 14:10:22.743565525 +0000 UTC m=+1581.055915650" observedRunningTime="2026-01-27 14:10:23.331662137 +0000 UTC m=+1581.644012242" watchObservedRunningTime="2026-01-27 14:10:23.340303515 +0000 UTC m=+1581.652653600" Jan 27 14:10:35 crc kubenswrapper[4914]: I0127 14:10:35.294953 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:10:35 crc kubenswrapper[4914]: E0127 14:10:35.296118 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:10:49 crc kubenswrapper[4914]: I0127 14:10:49.295732 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:10:49 crc kubenswrapper[4914]: E0127 14:10:49.296938 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:11:02 crc kubenswrapper[4914]: I0127 14:11:02.303164 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:11:02 crc kubenswrapper[4914]: E0127 14:11:02.304046 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:11:12 crc kubenswrapper[4914]: I0127 14:11:12.943096 4914 scope.go:117] "RemoveContainer" containerID="e2bc8afa15b9715d930f1d631bac4f165a1567fa763f778fd96f02ded1d467f1" Jan 27 14:11:16 crc kubenswrapper[4914]: I0127 14:11:16.336192 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:11:16 crc kubenswrapper[4914]: E0127 14:11:16.337151 4914 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:11:31 crc kubenswrapper[4914]: I0127 14:11:31.294542 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:11:31 crc kubenswrapper[4914]: E0127 14:11:31.295291 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:11:41 crc kubenswrapper[4914]: I0127 14:11:41.468514 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-55v6c"] Jan 27 14:11:41 crc kubenswrapper[4914]: I0127 14:11:41.470906 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:41 crc kubenswrapper[4914]: I0127 14:11:41.481937 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-55v6c"] Jan 27 14:11:41 crc kubenswrapper[4914]: I0127 14:11:41.620016 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a6b14e-7a82-40de-8355-f6df5260b2bf-catalog-content\") pod \"community-operators-55v6c\" (UID: \"45a6b14e-7a82-40de-8355-f6df5260b2bf\") " pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:41 crc kubenswrapper[4914]: I0127 14:11:41.620121 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a6b14e-7a82-40de-8355-f6df5260b2bf-utilities\") pod \"community-operators-55v6c\" (UID: \"45a6b14e-7a82-40de-8355-f6df5260b2bf\") " pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:41 crc kubenswrapper[4914]: I0127 14:11:41.620172 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbfsr\" (UniqueName: \"kubernetes.io/projected/45a6b14e-7a82-40de-8355-f6df5260b2bf-kube-api-access-dbfsr\") pod \"community-operators-55v6c\" (UID: \"45a6b14e-7a82-40de-8355-f6df5260b2bf\") " pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:41 crc kubenswrapper[4914]: I0127 14:11:41.722272 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbfsr\" (UniqueName: \"kubernetes.io/projected/45a6b14e-7a82-40de-8355-f6df5260b2bf-kube-api-access-dbfsr\") pod \"community-operators-55v6c\" (UID: \"45a6b14e-7a82-40de-8355-f6df5260b2bf\") " pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:41 crc kubenswrapper[4914]: I0127 14:11:41.722369 4914 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a6b14e-7a82-40de-8355-f6df5260b2bf-catalog-content\") pod \"community-operators-55v6c\" (UID: \"45a6b14e-7a82-40de-8355-f6df5260b2bf\") " pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:41 crc kubenswrapper[4914]: I0127 14:11:41.722435 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a6b14e-7a82-40de-8355-f6df5260b2bf-utilities\") pod \"community-operators-55v6c\" (UID: \"45a6b14e-7a82-40de-8355-f6df5260b2bf\") " pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:41 crc kubenswrapper[4914]: I0127 14:11:41.722900 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a6b14e-7a82-40de-8355-f6df5260b2bf-utilities\") pod \"community-operators-55v6c\" (UID: \"45a6b14e-7a82-40de-8355-f6df5260b2bf\") " pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:41 crc kubenswrapper[4914]: I0127 14:11:41.723398 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a6b14e-7a82-40de-8355-f6df5260b2bf-catalog-content\") pod \"community-operators-55v6c\" (UID: \"45a6b14e-7a82-40de-8355-f6df5260b2bf\") " pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:41 crc kubenswrapper[4914]: I0127 14:11:41.743019 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbfsr\" (UniqueName: \"kubernetes.io/projected/45a6b14e-7a82-40de-8355-f6df5260b2bf-kube-api-access-dbfsr\") pod \"community-operators-55v6c\" (UID: \"45a6b14e-7a82-40de-8355-f6df5260b2bf\") " pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:41 crc kubenswrapper[4914]: I0127 14:11:41.799850 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:42 crc kubenswrapper[4914]: I0127 14:11:42.326522 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-55v6c"] Jan 27 14:11:43 crc kubenswrapper[4914]: I0127 14:11:43.114814 4914 generic.go:334] "Generic (PLEG): container finished" podID="45a6b14e-7a82-40de-8355-f6df5260b2bf" containerID="870478afc53076737ace8295ee38ce1882c0c203a56289245e7ca7913521c439" exitCode=0 Jan 27 14:11:43 crc kubenswrapper[4914]: I0127 14:11:43.114877 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-55v6c" event={"ID":"45a6b14e-7a82-40de-8355-f6df5260b2bf","Type":"ContainerDied","Data":"870478afc53076737ace8295ee38ce1882c0c203a56289245e7ca7913521c439"} Jan 27 14:11:43 crc kubenswrapper[4914]: I0127 14:11:43.114902 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-55v6c" event={"ID":"45a6b14e-7a82-40de-8355-f6df5260b2bf","Type":"ContainerStarted","Data":"893d1262fa8da4497c3588f5729f9c1a15182acd65c5d9b25f039396161a43d1"} Jan 27 14:11:43 crc kubenswrapper[4914]: I0127 14:11:43.294063 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:11:43 crc kubenswrapper[4914]: E0127 14:11:43.294322 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:11:45 crc kubenswrapper[4914]: I0127 14:11:45.856020 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5bvgx"] Jan 
27 14:11:45 crc kubenswrapper[4914]: I0127 14:11:45.858812 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:45 crc kubenswrapper[4914]: I0127 14:11:45.888929 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5bvgx"] Jan 27 14:11:46 crc kubenswrapper[4914]: I0127 14:11:46.006873 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz6tk\" (UniqueName: \"kubernetes.io/projected/60208338-faea-48a5-a5e9-2b14616802c1-kube-api-access-rz6tk\") pod \"redhat-marketplace-5bvgx\" (UID: \"60208338-faea-48a5-a5e9-2b14616802c1\") " pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:46 crc kubenswrapper[4914]: I0127 14:11:46.007193 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60208338-faea-48a5-a5e9-2b14616802c1-utilities\") pod \"redhat-marketplace-5bvgx\" (UID: \"60208338-faea-48a5-a5e9-2b14616802c1\") " pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:46 crc kubenswrapper[4914]: I0127 14:11:46.007415 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60208338-faea-48a5-a5e9-2b14616802c1-catalog-content\") pod \"redhat-marketplace-5bvgx\" (UID: \"60208338-faea-48a5-a5e9-2b14616802c1\") " pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:46 crc kubenswrapper[4914]: I0127 14:11:46.108561 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60208338-faea-48a5-a5e9-2b14616802c1-utilities\") pod \"redhat-marketplace-5bvgx\" (UID: \"60208338-faea-48a5-a5e9-2b14616802c1\") " pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 
14:11:46 crc kubenswrapper[4914]: I0127 14:11:46.108660 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60208338-faea-48a5-a5e9-2b14616802c1-catalog-content\") pod \"redhat-marketplace-5bvgx\" (UID: \"60208338-faea-48a5-a5e9-2b14616802c1\") " pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:46 crc kubenswrapper[4914]: I0127 14:11:46.108702 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz6tk\" (UniqueName: \"kubernetes.io/projected/60208338-faea-48a5-a5e9-2b14616802c1-kube-api-access-rz6tk\") pod \"redhat-marketplace-5bvgx\" (UID: \"60208338-faea-48a5-a5e9-2b14616802c1\") " pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:46 crc kubenswrapper[4914]: I0127 14:11:46.109383 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60208338-faea-48a5-a5e9-2b14616802c1-utilities\") pod \"redhat-marketplace-5bvgx\" (UID: \"60208338-faea-48a5-a5e9-2b14616802c1\") " pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:46 crc kubenswrapper[4914]: I0127 14:11:46.109399 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60208338-faea-48a5-a5e9-2b14616802c1-catalog-content\") pod \"redhat-marketplace-5bvgx\" (UID: \"60208338-faea-48a5-a5e9-2b14616802c1\") " pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:46 crc kubenswrapper[4914]: I0127 14:11:46.130155 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz6tk\" (UniqueName: \"kubernetes.io/projected/60208338-faea-48a5-a5e9-2b14616802c1-kube-api-access-rz6tk\") pod \"redhat-marketplace-5bvgx\" (UID: \"60208338-faea-48a5-a5e9-2b14616802c1\") " pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:46 crc kubenswrapper[4914]: 
I0127 14:11:46.177727 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:46 crc kubenswrapper[4914]: W0127 14:11:46.661696 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60208338_faea_48a5_a5e9_2b14616802c1.slice/crio-89182381a04b4fed14cad28d657b2511bd3952064d4342e342949c38b4abb9fd WatchSource:0}: Error finding container 89182381a04b4fed14cad28d657b2511bd3952064d4342e342949c38b4abb9fd: Status 404 returned error can't find the container with id 89182381a04b4fed14cad28d657b2511bd3952064d4342e342949c38b4abb9fd Jan 27 14:11:46 crc kubenswrapper[4914]: I0127 14:11:46.663226 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5bvgx"] Jan 27 14:11:47 crc kubenswrapper[4914]: I0127 14:11:47.159096 4914 generic.go:334] "Generic (PLEG): container finished" podID="60208338-faea-48a5-a5e9-2b14616802c1" containerID="9ba11a7a33b6785fcfd66dfed9016af0ad1b00e443a7aece19bfd0b22835535b" exitCode=0 Jan 27 14:11:47 crc kubenswrapper[4914]: I0127 14:11:47.159193 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5bvgx" event={"ID":"60208338-faea-48a5-a5e9-2b14616802c1","Type":"ContainerDied","Data":"9ba11a7a33b6785fcfd66dfed9016af0ad1b00e443a7aece19bfd0b22835535b"} Jan 27 14:11:47 crc kubenswrapper[4914]: I0127 14:11:47.159481 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5bvgx" event={"ID":"60208338-faea-48a5-a5e9-2b14616802c1","Type":"ContainerStarted","Data":"89182381a04b4fed14cad28d657b2511bd3952064d4342e342949c38b4abb9fd"} Jan 27 14:11:48 crc kubenswrapper[4914]: I0127 14:11:48.169288 4914 generic.go:334] "Generic (PLEG): container finished" podID="60208338-faea-48a5-a5e9-2b14616802c1" 
containerID="9090ec2b35b244e1bcac800f7fb3b344745fd4dc98717040d10ae7fc4facd964" exitCode=0 Jan 27 14:11:48 crc kubenswrapper[4914]: I0127 14:11:48.169348 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5bvgx" event={"ID":"60208338-faea-48a5-a5e9-2b14616802c1","Type":"ContainerDied","Data":"9090ec2b35b244e1bcac800f7fb3b344745fd4dc98717040d10ae7fc4facd964"} Jan 27 14:11:49 crc kubenswrapper[4914]: I0127 14:11:49.180362 4914 generic.go:334] "Generic (PLEG): container finished" podID="45a6b14e-7a82-40de-8355-f6df5260b2bf" containerID="952da7292e485e14cfb341633518dcd772e4fc28b65f0aa6e6a7887f453223de" exitCode=0 Jan 27 14:11:49 crc kubenswrapper[4914]: I0127 14:11:49.180565 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-55v6c" event={"ID":"45a6b14e-7a82-40de-8355-f6df5260b2bf","Type":"ContainerDied","Data":"952da7292e485e14cfb341633518dcd772e4fc28b65f0aa6e6a7887f453223de"} Jan 27 14:11:49 crc kubenswrapper[4914]: I0127 14:11:49.193211 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5bvgx" event={"ID":"60208338-faea-48a5-a5e9-2b14616802c1","Type":"ContainerStarted","Data":"d91be0e8a30da17612b02664f6de28177f27ce95fec0e01493d85870dee0228a"} Jan 27 14:11:49 crc kubenswrapper[4914]: I0127 14:11:49.216544 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5bvgx" podStartSLOduration=2.772957851 podStartE2EDuration="4.216526313s" podCreationTimestamp="2026-01-27 14:11:45 +0000 UTC" firstStartedPulling="2026-01-27 14:11:47.160598324 +0000 UTC m=+1665.472948409" lastFinishedPulling="2026-01-27 14:11:48.604166786 +0000 UTC m=+1666.916516871" observedRunningTime="2026-01-27 14:11:49.213596432 +0000 UTC m=+1667.525946517" watchObservedRunningTime="2026-01-27 14:11:49.216526313 +0000 UTC m=+1667.528876398" Jan 27 14:11:50 crc kubenswrapper[4914]: I0127 
14:11:50.202670 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-55v6c" event={"ID":"45a6b14e-7a82-40de-8355-f6df5260b2bf","Type":"ContainerStarted","Data":"135c7ae2c473a8ad1a5a244d840b4734c8c19a1e1bfeb8d98464cd0b66d08225"} Jan 27 14:11:50 crc kubenswrapper[4914]: I0127 14:11:50.223063 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-55v6c" podStartSLOduration=2.724304178 podStartE2EDuration="9.223041983s" podCreationTimestamp="2026-01-27 14:11:41 +0000 UTC" firstStartedPulling="2026-01-27 14:11:43.117307499 +0000 UTC m=+1661.429657584" lastFinishedPulling="2026-01-27 14:11:49.616045304 +0000 UTC m=+1667.928395389" observedRunningTime="2026-01-27 14:11:50.21897119 +0000 UTC m=+1668.531321275" watchObservedRunningTime="2026-01-27 14:11:50.223041983 +0000 UTC m=+1668.535392068" Jan 27 14:11:51 crc kubenswrapper[4914]: I0127 14:11:51.800907 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:51 crc kubenswrapper[4914]: I0127 14:11:51.801213 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:51 crc kubenswrapper[4914]: I0127 14:11:51.867288 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:11:55 crc kubenswrapper[4914]: I0127 14:11:55.295671 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:11:55 crc kubenswrapper[4914]: E0127 14:11:55.296476 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:11:56 crc kubenswrapper[4914]: I0127 14:11:56.178112 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:56 crc kubenswrapper[4914]: I0127 14:11:56.178416 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:56 crc kubenswrapper[4914]: I0127 14:11:56.227696 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:56 crc kubenswrapper[4914]: I0127 14:11:56.342271 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:56 crc kubenswrapper[4914]: I0127 14:11:56.471711 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5bvgx"] Jan 27 14:11:58 crc kubenswrapper[4914]: I0127 14:11:58.280376 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5bvgx" podUID="60208338-faea-48a5-a5e9-2b14616802c1" containerName="registry-server" containerID="cri-o://d91be0e8a30da17612b02664f6de28177f27ce95fec0e01493d85870dee0228a" gracePeriod=2 Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.272991 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.316157 4914 generic.go:334] "Generic (PLEG): container finished" podID="60208338-faea-48a5-a5e9-2b14616802c1" containerID="d91be0e8a30da17612b02664f6de28177f27ce95fec0e01493d85870dee0228a" exitCode=0 Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.316219 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5bvgx" event={"ID":"60208338-faea-48a5-a5e9-2b14616802c1","Type":"ContainerDied","Data":"d91be0e8a30da17612b02664f6de28177f27ce95fec0e01493d85870dee0228a"} Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.316254 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5bvgx" event={"ID":"60208338-faea-48a5-a5e9-2b14616802c1","Type":"ContainerDied","Data":"89182381a04b4fed14cad28d657b2511bd3952064d4342e342949c38b4abb9fd"} Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.316287 4914 scope.go:117] "RemoveContainer" containerID="d91be0e8a30da17612b02664f6de28177f27ce95fec0e01493d85870dee0228a" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.316507 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5bvgx" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.337590 4914 scope.go:117] "RemoveContainer" containerID="9090ec2b35b244e1bcac800f7fb3b344745fd4dc98717040d10ae7fc4facd964" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.358516 4914 scope.go:117] "RemoveContainer" containerID="9ba11a7a33b6785fcfd66dfed9016af0ad1b00e443a7aece19bfd0b22835535b" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.359062 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60208338-faea-48a5-a5e9-2b14616802c1-catalog-content\") pod \"60208338-faea-48a5-a5e9-2b14616802c1\" (UID: \"60208338-faea-48a5-a5e9-2b14616802c1\") " Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.359113 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz6tk\" (UniqueName: \"kubernetes.io/projected/60208338-faea-48a5-a5e9-2b14616802c1-kube-api-access-rz6tk\") pod \"60208338-faea-48a5-a5e9-2b14616802c1\" (UID: \"60208338-faea-48a5-a5e9-2b14616802c1\") " Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.359190 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60208338-faea-48a5-a5e9-2b14616802c1-utilities\") pod \"60208338-faea-48a5-a5e9-2b14616802c1\" (UID: \"60208338-faea-48a5-a5e9-2b14616802c1\") " Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.360420 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60208338-faea-48a5-a5e9-2b14616802c1-utilities" (OuterVolumeSpecName: "utilities") pod "60208338-faea-48a5-a5e9-2b14616802c1" (UID: "60208338-faea-48a5-a5e9-2b14616802c1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.365824 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60208338-faea-48a5-a5e9-2b14616802c1-kube-api-access-rz6tk" (OuterVolumeSpecName: "kube-api-access-rz6tk") pod "60208338-faea-48a5-a5e9-2b14616802c1" (UID: "60208338-faea-48a5-a5e9-2b14616802c1"). InnerVolumeSpecName "kube-api-access-rz6tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.379912 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60208338-faea-48a5-a5e9-2b14616802c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60208338-faea-48a5-a5e9-2b14616802c1" (UID: "60208338-faea-48a5-a5e9-2b14616802c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.441158 4914 scope.go:117] "RemoveContainer" containerID="d91be0e8a30da17612b02664f6de28177f27ce95fec0e01493d85870dee0228a" Jan 27 14:11:59 crc kubenswrapper[4914]: E0127 14:11:59.441558 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d91be0e8a30da17612b02664f6de28177f27ce95fec0e01493d85870dee0228a\": container with ID starting with d91be0e8a30da17612b02664f6de28177f27ce95fec0e01493d85870dee0228a not found: ID does not exist" containerID="d91be0e8a30da17612b02664f6de28177f27ce95fec0e01493d85870dee0228a" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.441598 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d91be0e8a30da17612b02664f6de28177f27ce95fec0e01493d85870dee0228a"} err="failed to get container status \"d91be0e8a30da17612b02664f6de28177f27ce95fec0e01493d85870dee0228a\": rpc error: code = NotFound desc = could not find 
container \"d91be0e8a30da17612b02664f6de28177f27ce95fec0e01493d85870dee0228a\": container with ID starting with d91be0e8a30da17612b02664f6de28177f27ce95fec0e01493d85870dee0228a not found: ID does not exist" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.441624 4914 scope.go:117] "RemoveContainer" containerID="9090ec2b35b244e1bcac800f7fb3b344745fd4dc98717040d10ae7fc4facd964" Jan 27 14:11:59 crc kubenswrapper[4914]: E0127 14:11:59.442037 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9090ec2b35b244e1bcac800f7fb3b344745fd4dc98717040d10ae7fc4facd964\": container with ID starting with 9090ec2b35b244e1bcac800f7fb3b344745fd4dc98717040d10ae7fc4facd964 not found: ID does not exist" containerID="9090ec2b35b244e1bcac800f7fb3b344745fd4dc98717040d10ae7fc4facd964" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.442074 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9090ec2b35b244e1bcac800f7fb3b344745fd4dc98717040d10ae7fc4facd964"} err="failed to get container status \"9090ec2b35b244e1bcac800f7fb3b344745fd4dc98717040d10ae7fc4facd964\": rpc error: code = NotFound desc = could not find container \"9090ec2b35b244e1bcac800f7fb3b344745fd4dc98717040d10ae7fc4facd964\": container with ID starting with 9090ec2b35b244e1bcac800f7fb3b344745fd4dc98717040d10ae7fc4facd964 not found: ID does not exist" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.442092 4914 scope.go:117] "RemoveContainer" containerID="9ba11a7a33b6785fcfd66dfed9016af0ad1b00e443a7aece19bfd0b22835535b" Jan 27 14:11:59 crc kubenswrapper[4914]: E0127 14:11:59.442331 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba11a7a33b6785fcfd66dfed9016af0ad1b00e443a7aece19bfd0b22835535b\": container with ID starting with 9ba11a7a33b6785fcfd66dfed9016af0ad1b00e443a7aece19bfd0b22835535b not found: ID does 
not exist" containerID="9ba11a7a33b6785fcfd66dfed9016af0ad1b00e443a7aece19bfd0b22835535b" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.442366 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba11a7a33b6785fcfd66dfed9016af0ad1b00e443a7aece19bfd0b22835535b"} err="failed to get container status \"9ba11a7a33b6785fcfd66dfed9016af0ad1b00e443a7aece19bfd0b22835535b\": rpc error: code = NotFound desc = could not find container \"9ba11a7a33b6785fcfd66dfed9016af0ad1b00e443a7aece19bfd0b22835535b\": container with ID starting with 9ba11a7a33b6785fcfd66dfed9016af0ad1b00e443a7aece19bfd0b22835535b not found: ID does not exist" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.461594 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60208338-faea-48a5-a5e9-2b14616802c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.461633 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz6tk\" (UniqueName: \"kubernetes.io/projected/60208338-faea-48a5-a5e9-2b14616802c1-kube-api-access-rz6tk\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.461649 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60208338-faea-48a5-a5e9-2b14616802c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.658201 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5bvgx"] Jan 27 14:11:59 crc kubenswrapper[4914]: I0127 14:11:59.672082 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5bvgx"] Jan 27 14:12:00 crc kubenswrapper[4914]: I0127 14:12:00.303935 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="60208338-faea-48a5-a5e9-2b14616802c1" path="/var/lib/kubelet/pods/60208338-faea-48a5-a5e9-2b14616802c1/volumes" Jan 27 14:12:01 crc kubenswrapper[4914]: I0127 14:12:01.847106 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:12:01 crc kubenswrapper[4914]: I0127 14:12:01.899652 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-55v6c"] Jan 27 14:12:02 crc kubenswrapper[4914]: I0127 14:12:02.360861 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-55v6c" podUID="45a6b14e-7a82-40de-8355-f6df5260b2bf" containerName="registry-server" containerID="cri-o://135c7ae2c473a8ad1a5a244d840b4734c8c19a1e1bfeb8d98464cd0b66d08225" gracePeriod=2 Jan 27 14:12:02 crc kubenswrapper[4914]: I0127 14:12:02.772202 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:12:02 crc kubenswrapper[4914]: I0127 14:12:02.970702 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a6b14e-7a82-40de-8355-f6df5260b2bf-utilities\") pod \"45a6b14e-7a82-40de-8355-f6df5260b2bf\" (UID: \"45a6b14e-7a82-40de-8355-f6df5260b2bf\") " Jan 27 14:12:02 crc kubenswrapper[4914]: I0127 14:12:02.970911 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a6b14e-7a82-40de-8355-f6df5260b2bf-catalog-content\") pod \"45a6b14e-7a82-40de-8355-f6df5260b2bf\" (UID: \"45a6b14e-7a82-40de-8355-f6df5260b2bf\") " Jan 27 14:12:02 crc kubenswrapper[4914]: I0127 14:12:02.970946 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbfsr\" (UniqueName: 
\"kubernetes.io/projected/45a6b14e-7a82-40de-8355-f6df5260b2bf-kube-api-access-dbfsr\") pod \"45a6b14e-7a82-40de-8355-f6df5260b2bf\" (UID: \"45a6b14e-7a82-40de-8355-f6df5260b2bf\") " Jan 27 14:12:02 crc kubenswrapper[4914]: I0127 14:12:02.972276 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a6b14e-7a82-40de-8355-f6df5260b2bf-utilities" (OuterVolumeSpecName: "utilities") pod "45a6b14e-7a82-40de-8355-f6df5260b2bf" (UID: "45a6b14e-7a82-40de-8355-f6df5260b2bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:02 crc kubenswrapper[4914]: I0127 14:12:02.977202 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a6b14e-7a82-40de-8355-f6df5260b2bf-kube-api-access-dbfsr" (OuterVolumeSpecName: "kube-api-access-dbfsr") pod "45a6b14e-7a82-40de-8355-f6df5260b2bf" (UID: "45a6b14e-7a82-40de-8355-f6df5260b2bf"). InnerVolumeSpecName "kube-api-access-dbfsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.043999 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a6b14e-7a82-40de-8355-f6df5260b2bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45a6b14e-7a82-40de-8355-f6df5260b2bf" (UID: "45a6b14e-7a82-40de-8355-f6df5260b2bf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.072960 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a6b14e-7a82-40de-8355-f6df5260b2bf-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.072998 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a6b14e-7a82-40de-8355-f6df5260b2bf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.073009 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbfsr\" (UniqueName: \"kubernetes.io/projected/45a6b14e-7a82-40de-8355-f6df5260b2bf-kube-api-access-dbfsr\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.374267 4914 generic.go:334] "Generic (PLEG): container finished" podID="45a6b14e-7a82-40de-8355-f6df5260b2bf" containerID="135c7ae2c473a8ad1a5a244d840b4734c8c19a1e1bfeb8d98464cd0b66d08225" exitCode=0 Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.374348 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-55v6c" event={"ID":"45a6b14e-7a82-40de-8355-f6df5260b2bf","Type":"ContainerDied","Data":"135c7ae2c473a8ad1a5a244d840b4734c8c19a1e1bfeb8d98464cd0b66d08225"} Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.374418 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-55v6c" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.374439 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-55v6c" event={"ID":"45a6b14e-7a82-40de-8355-f6df5260b2bf","Type":"ContainerDied","Data":"893d1262fa8da4497c3588f5729f9c1a15182acd65c5d9b25f039396161a43d1"} Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.374474 4914 scope.go:117] "RemoveContainer" containerID="135c7ae2c473a8ad1a5a244d840b4734c8c19a1e1bfeb8d98464cd0b66d08225" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.396720 4914 scope.go:117] "RemoveContainer" containerID="952da7292e485e14cfb341633518dcd772e4fc28b65f0aa6e6a7887f453223de" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.415486 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-55v6c"] Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.423394 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-55v6c"] Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.435584 4914 scope.go:117] "RemoveContainer" containerID="870478afc53076737ace8295ee38ce1882c0c203a56289245e7ca7913521c439" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.471855 4914 scope.go:117] "RemoveContainer" containerID="135c7ae2c473a8ad1a5a244d840b4734c8c19a1e1bfeb8d98464cd0b66d08225" Jan 27 14:12:03 crc kubenswrapper[4914]: E0127 14:12:03.472271 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"135c7ae2c473a8ad1a5a244d840b4734c8c19a1e1bfeb8d98464cd0b66d08225\": container with ID starting with 135c7ae2c473a8ad1a5a244d840b4734c8c19a1e1bfeb8d98464cd0b66d08225 not found: ID does not exist" containerID="135c7ae2c473a8ad1a5a244d840b4734c8c19a1e1bfeb8d98464cd0b66d08225" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.472300 4914 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"135c7ae2c473a8ad1a5a244d840b4734c8c19a1e1bfeb8d98464cd0b66d08225"} err="failed to get container status \"135c7ae2c473a8ad1a5a244d840b4734c8c19a1e1bfeb8d98464cd0b66d08225\": rpc error: code = NotFound desc = could not find container \"135c7ae2c473a8ad1a5a244d840b4734c8c19a1e1bfeb8d98464cd0b66d08225\": container with ID starting with 135c7ae2c473a8ad1a5a244d840b4734c8c19a1e1bfeb8d98464cd0b66d08225 not found: ID does not exist" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.472321 4914 scope.go:117] "RemoveContainer" containerID="952da7292e485e14cfb341633518dcd772e4fc28b65f0aa6e6a7887f453223de" Jan 27 14:12:03 crc kubenswrapper[4914]: E0127 14:12:03.472501 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"952da7292e485e14cfb341633518dcd772e4fc28b65f0aa6e6a7887f453223de\": container with ID starting with 952da7292e485e14cfb341633518dcd772e4fc28b65f0aa6e6a7887f453223de not found: ID does not exist" containerID="952da7292e485e14cfb341633518dcd772e4fc28b65f0aa6e6a7887f453223de" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.472521 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"952da7292e485e14cfb341633518dcd772e4fc28b65f0aa6e6a7887f453223de"} err="failed to get container status \"952da7292e485e14cfb341633518dcd772e4fc28b65f0aa6e6a7887f453223de\": rpc error: code = NotFound desc = could not find container \"952da7292e485e14cfb341633518dcd772e4fc28b65f0aa6e6a7887f453223de\": container with ID starting with 952da7292e485e14cfb341633518dcd772e4fc28b65f0aa6e6a7887f453223de not found: ID does not exist" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.472537 4914 scope.go:117] "RemoveContainer" containerID="870478afc53076737ace8295ee38ce1882c0c203a56289245e7ca7913521c439" Jan 27 14:12:03 crc kubenswrapper[4914]: E0127 
14:12:03.472717 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"870478afc53076737ace8295ee38ce1882c0c203a56289245e7ca7913521c439\": container with ID starting with 870478afc53076737ace8295ee38ce1882c0c203a56289245e7ca7913521c439 not found: ID does not exist" containerID="870478afc53076737ace8295ee38ce1882c0c203a56289245e7ca7913521c439" Jan 27 14:12:03 crc kubenswrapper[4914]: I0127 14:12:03.472735 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870478afc53076737ace8295ee38ce1882c0c203a56289245e7ca7913521c439"} err="failed to get container status \"870478afc53076737ace8295ee38ce1882c0c203a56289245e7ca7913521c439\": rpc error: code = NotFound desc = could not find container \"870478afc53076737ace8295ee38ce1882c0c203a56289245e7ca7913521c439\": container with ID starting with 870478afc53076737ace8295ee38ce1882c0c203a56289245e7ca7913521c439 not found: ID does not exist" Jan 27 14:12:04 crc kubenswrapper[4914]: I0127 14:12:04.313645 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a6b14e-7a82-40de-8355-f6df5260b2bf" path="/var/lib/kubelet/pods/45a6b14e-7a82-40de-8355-f6df5260b2bf/volumes" Jan 27 14:12:08 crc kubenswrapper[4914]: I0127 14:12:08.294575 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:12:08 crc kubenswrapper[4914]: E0127 14:12:08.295351 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:12:13 crc kubenswrapper[4914]: I0127 14:12:13.038220 
4914 scope.go:117] "RemoveContainer" containerID="1e43ccef9ed27cc09e85bb71803810d7f2a701d33b94b88b7c80b4666ba13daa" Jan 27 14:12:13 crc kubenswrapper[4914]: I0127 14:12:13.057503 4914 scope.go:117] "RemoveContainer" containerID="3f984ff47bc02ee131274ec13f98b917eed6d7c0984b5f4ad9ed23f20d0a300b" Jan 27 14:12:23 crc kubenswrapper[4914]: I0127 14:12:23.294936 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:12:23 crc kubenswrapper[4914]: E0127 14:12:23.296293 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:12:37 crc kubenswrapper[4914]: I0127 14:12:37.294579 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:12:37 crc kubenswrapper[4914]: E0127 14:12:37.295517 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:12:52 crc kubenswrapper[4914]: I0127 14:12:52.301673 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:12:52 crc kubenswrapper[4914]: E0127 14:12:52.302926 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:13:03 crc kubenswrapper[4914]: I0127 14:13:03.294577 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:13:03 crc kubenswrapper[4914]: E0127 14:13:03.295384 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:13:13 crc kubenswrapper[4914]: I0127 14:13:13.151342 4914 scope.go:117] "RemoveContainer" containerID="e20bc525535dca84894bea9c8166d5febea6348255cb0f9d2fb922214417f693" Jan 27 14:13:18 crc kubenswrapper[4914]: I0127 14:13:18.294501 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:13:18 crc kubenswrapper[4914]: E0127 14:13:18.295286 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:13:29 crc kubenswrapper[4914]: I0127 14:13:29.294905 4914 scope.go:117] "RemoveContainer" 
containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:13:29 crc kubenswrapper[4914]: E0127 14:13:29.296370 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:13:44 crc kubenswrapper[4914]: I0127 14:13:44.044641 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f1c7-account-create-update-2j2jk"] Jan 27 14:13:44 crc kubenswrapper[4914]: I0127 14:13:44.053063 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nsvsv"] Jan 27 14:13:44 crc kubenswrapper[4914]: I0127 14:13:44.063052 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f1c7-account-create-update-2j2jk"] Jan 27 14:13:44 crc kubenswrapper[4914]: I0127 14:13:44.071475 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nsvsv"] Jan 27 14:13:44 crc kubenswrapper[4914]: I0127 14:13:44.294790 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:13:44 crc kubenswrapper[4914]: E0127 14:13:44.295342 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:13:44 crc kubenswrapper[4914]: I0127 14:13:44.305657 4914 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f2122e6-777c-4be7-86ae-a16bd4255827" path="/var/lib/kubelet/pods/7f2122e6-777c-4be7-86ae-a16bd4255827/volumes" Jan 27 14:13:44 crc kubenswrapper[4914]: I0127 14:13:44.307216 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e88dce20-0af2-4799-8351-7ba637ee84c0" path="/var/lib/kubelet/pods/e88dce20-0af2-4799-8351-7ba637ee84c0/volumes" Jan 27 14:13:45 crc kubenswrapper[4914]: I0127 14:13:45.054985 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c405-account-create-update-mr8x8"] Jan 27 14:13:45 crc kubenswrapper[4914]: I0127 14:13:45.072546 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7459-account-create-update-wjgrv"] Jan 27 14:13:45 crc kubenswrapper[4914]: I0127 14:13:45.081416 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-hs9fd"] Jan 27 14:13:45 crc kubenswrapper[4914]: I0127 14:13:45.091931 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-cl47n"] Jan 27 14:13:45 crc kubenswrapper[4914]: I0127 14:13:45.100558 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c405-account-create-update-mr8x8"] Jan 27 14:13:45 crc kubenswrapper[4914]: I0127 14:13:45.109159 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-cl47n"] Jan 27 14:13:45 crc kubenswrapper[4914]: I0127 14:13:45.117815 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7459-account-create-update-wjgrv"] Jan 27 14:13:45 crc kubenswrapper[4914]: I0127 14:13:45.125417 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-hs9fd"] Jan 27 14:13:46 crc kubenswrapper[4914]: I0127 14:13:46.311089 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27c36820-0762-4659-943c-2748a1bc3ca7" 
path="/var/lib/kubelet/pods/27c36820-0762-4659-943c-2748a1bc3ca7/volumes" Jan 27 14:13:46 crc kubenswrapper[4914]: I0127 14:13:46.311823 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6359c6f6-67f2-4b56-8ff9-57f336621b20" path="/var/lib/kubelet/pods/6359c6f6-67f2-4b56-8ff9-57f336621b20/volumes" Jan 27 14:13:46 crc kubenswrapper[4914]: I0127 14:13:46.312557 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0f14b57-bd10-4e82-ac68-aac2faa80f49" path="/var/lib/kubelet/pods/c0f14b57-bd10-4e82-ac68-aac2faa80f49/volumes" Jan 27 14:13:46 crc kubenswrapper[4914]: I0127 14:13:46.313276 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3e09913-1b2c-459a-a120-f0d61eedec2a" path="/var/lib/kubelet/pods/e3e09913-1b2c-459a-a120-f0d61eedec2a/volumes" Jan 27 14:13:57 crc kubenswrapper[4914]: I0127 14:13:57.294306 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:13:57 crc kubenswrapper[4914]: E0127 14:13:57.295254 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:14:06 crc kubenswrapper[4914]: I0127 14:14:06.068199 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mpmhb"] Jan 27 14:14:06 crc kubenswrapper[4914]: I0127 14:14:06.078311 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mpmhb"] Jan 27 14:14:06 crc kubenswrapper[4914]: I0127 14:14:06.387418 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="90a2b976-d911-40b1-a016-7c8cf2df1a19" path="/var/lib/kubelet/pods/90a2b976-d911-40b1-a016-7c8cf2df1a19/volumes" Jan 27 14:14:12 crc kubenswrapper[4914]: I0127 14:14:12.301663 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:14:12 crc kubenswrapper[4914]: E0127 14:14:12.302565 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:14:13 crc kubenswrapper[4914]: I0127 14:14:13.053359 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-89vdt"] Jan 27 14:14:13 crc kubenswrapper[4914]: I0127 14:14:13.064974 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-89vdt"] Jan 27 14:14:13 crc kubenswrapper[4914]: I0127 14:14:13.204938 4914 scope.go:117] "RemoveContainer" containerID="15782267b14023839eaed6430aa044ff92a42e347c8d0020606c8069b4cd578e" Jan 27 14:14:13 crc kubenswrapper[4914]: I0127 14:14:13.249475 4914 scope.go:117] "RemoveContainer" containerID="25aff3a107df0be6e0642f5ee3e72920aadea0fcf62e8deb0f5c751a8e96e08b" Jan 27 14:14:13 crc kubenswrapper[4914]: I0127 14:14:13.392128 4914 scope.go:117] "RemoveContainer" containerID="e90da8565533a6bd73778722e664ba725c1eaf314a38012b0b941a6c1dd9d1fa" Jan 27 14:14:13 crc kubenswrapper[4914]: I0127 14:14:13.411678 4914 scope.go:117] "RemoveContainer" containerID="9682a771aa890361b063287af87e90505b32d5885983464b76c4471e0e8ddae1" Jan 27 14:14:13 crc kubenswrapper[4914]: I0127 14:14:13.456141 4914 scope.go:117] "RemoveContainer" containerID="0c93aeef90a993012d9bdc80253d53ca4088501f65a21bb65c2d958a9e1ae22b" 
Jan 27 14:14:13 crc kubenswrapper[4914]: I0127 14:14:13.495421 4914 scope.go:117] "RemoveContainer" containerID="858f489db88bdd88ea71a3ca3bcf8831b5db12046f431de1171778c788340ac1" Jan 27 14:14:13 crc kubenswrapper[4914]: I0127 14:14:13.541143 4914 scope.go:117] "RemoveContainer" containerID="5ceb4ed78648f4597d51777c757f02a0b246eed38d9872e9ed2ab3f53a2d196b" Jan 27 14:14:13 crc kubenswrapper[4914]: I0127 14:14:13.560904 4914 scope.go:117] "RemoveContainer" containerID="84267457ba98b000110e443198b980446fcd0e488ec44c5e1ced3720f4eb58ad" Jan 27 14:14:13 crc kubenswrapper[4914]: I0127 14:14:13.579430 4914 scope.go:117] "RemoveContainer" containerID="7825002cc51f2d69f0e68704b2045db7a0056d81de6eaea697547110ced248a5" Jan 27 14:14:14 crc kubenswrapper[4914]: I0127 14:14:14.311580 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54c4cebf-28fc-49cf-93e1-c10215cd7c85" path="/var/lib/kubelet/pods/54c4cebf-28fc-49cf-93e1-c10215cd7c85/volumes" Jan 27 14:14:22 crc kubenswrapper[4914]: I0127 14:14:22.040589 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-rq4q6"] Jan 27 14:14:22 crc kubenswrapper[4914]: I0127 14:14:22.051107 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-18f0-account-create-update-v84s5"] Jan 27 14:14:22 crc kubenswrapper[4914]: I0127 14:14:22.060698 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-rq4q6"] Jan 27 14:14:22 crc kubenswrapper[4914]: I0127 14:14:22.068480 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-18f0-account-create-update-v84s5"] Jan 27 14:14:22 crc kubenswrapper[4914]: I0127 14:14:22.311247 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593fca6b-0503-46d8-8b39-0b6fbf49c883" path="/var/lib/kubelet/pods/593fca6b-0503-46d8-8b39-0b6fbf49c883/volumes" Jan 27 14:14:22 crc kubenswrapper[4914]: I0127 14:14:22.312080 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="94e73cf8-51af-4781-be3c-ef7490061629" path="/var/lib/kubelet/pods/94e73cf8-51af-4781-be3c-ef7490061629/volumes" Jan 27 14:14:24 crc kubenswrapper[4914]: I0127 14:14:24.294309 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:14:24 crc kubenswrapper[4914]: E0127 14:14:24.295008 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:14:25 crc kubenswrapper[4914]: I0127 14:14:25.034920 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a4a5-account-create-update-dzsqd"] Jan 27 14:14:25 crc kubenswrapper[4914]: I0127 14:14:25.045913 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-6bd6p"] Jan 27 14:14:25 crc kubenswrapper[4914]: I0127 14:14:25.059026 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-vgx5c"] Jan 27 14:14:25 crc kubenswrapper[4914]: I0127 14:14:25.067679 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a4a5-account-create-update-dzsqd"] Jan 27 14:14:25 crc kubenswrapper[4914]: I0127 14:14:25.078085 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-6bd6p"] Jan 27 14:14:25 crc kubenswrapper[4914]: I0127 14:14:25.088851 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-vgx5c"] Jan 27 14:14:26 crc kubenswrapper[4914]: I0127 14:14:26.031117 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2e5b-account-create-update-szp9d"] Jan 27 14:14:26 crc 
kubenswrapper[4914]: I0127 14:14:26.039341 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2e5b-account-create-update-szp9d"] Jan 27 14:14:26 crc kubenswrapper[4914]: I0127 14:14:26.309921 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a6452aa-f069-44f7-89ef-2766d721810d" path="/var/lib/kubelet/pods/1a6452aa-f069-44f7-89ef-2766d721810d/volumes" Jan 27 14:14:26 crc kubenswrapper[4914]: I0127 14:14:26.310708 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f" path="/var/lib/kubelet/pods/6b4ef1cb-6b1a-4d11-86b3-87e89e1b754f/volumes" Jan 27 14:14:26 crc kubenswrapper[4914]: I0127 14:14:26.311463 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="722d139b-6c73-46cf-918b-6eec6bcee414" path="/var/lib/kubelet/pods/722d139b-6c73-46cf-918b-6eec6bcee414/volumes" Jan 27 14:14:26 crc kubenswrapper[4914]: I0127 14:14:26.312225 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eec0612-69ee-4cf2-aa84-c08891b33e53" path="/var/lib/kubelet/pods/9eec0612-69ee-4cf2-aa84-c08891b33e53/volumes" Jan 27 14:14:30 crc kubenswrapper[4914]: I0127 14:14:30.030816 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-5bzqr"] Jan 27 14:14:30 crc kubenswrapper[4914]: I0127 14:14:30.038759 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-5bzqr"] Jan 27 14:14:30 crc kubenswrapper[4914]: I0127 14:14:30.305613 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="259bfb44-0b45-476a-901a-e70c6b05a0e4" path="/var/lib/kubelet/pods/259bfb44-0b45-476a-901a-e70c6b05a0e4/volumes" Jan 27 14:14:39 crc kubenswrapper[4914]: I0127 14:14:39.294246 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:14:39 crc kubenswrapper[4914]: E0127 14:14:39.294916 4914 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:14:50 crc kubenswrapper[4914]: I0127 14:14:50.294469 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:14:50 crc kubenswrapper[4914]: E0127 14:14:50.295311 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.156988 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m"] Jan 27 14:15:00 crc kubenswrapper[4914]: E0127 14:15:00.157859 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a6b14e-7a82-40de-8355-f6df5260b2bf" containerName="registry-server" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.157876 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a6b14e-7a82-40de-8355-f6df5260b2bf" containerName="registry-server" Jan 27 14:15:00 crc kubenswrapper[4914]: E0127 14:15:00.157891 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60208338-faea-48a5-a5e9-2b14616802c1" containerName="extract-utilities" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.157898 4914 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="60208338-faea-48a5-a5e9-2b14616802c1" containerName="extract-utilities" Jan 27 14:15:00 crc kubenswrapper[4914]: E0127 14:15:00.157911 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a6b14e-7a82-40de-8355-f6df5260b2bf" containerName="extract-content" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.157917 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a6b14e-7a82-40de-8355-f6df5260b2bf" containerName="extract-content" Jan 27 14:15:00 crc kubenswrapper[4914]: E0127 14:15:00.157932 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60208338-faea-48a5-a5e9-2b14616802c1" containerName="extract-content" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.157938 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="60208338-faea-48a5-a5e9-2b14616802c1" containerName="extract-content" Jan 27 14:15:00 crc kubenswrapper[4914]: E0127 14:15:00.157949 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a6b14e-7a82-40de-8355-f6df5260b2bf" containerName="extract-utilities" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.157955 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a6b14e-7a82-40de-8355-f6df5260b2bf" containerName="extract-utilities" Jan 27 14:15:00 crc kubenswrapper[4914]: E0127 14:15:00.157971 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60208338-faea-48a5-a5e9-2b14616802c1" containerName="registry-server" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.157978 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="60208338-faea-48a5-a5e9-2b14616802c1" containerName="registry-server" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.158168 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="60208338-faea-48a5-a5e9-2b14616802c1" containerName="registry-server" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.158195 4914 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="45a6b14e-7a82-40de-8355-f6df5260b2bf" containerName="registry-server" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.158794 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.160920 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.161121 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.172664 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m"] Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.273040 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf309a38-5277-430c-afa8-c4f31c8158bd-config-volume\") pod \"collect-profiles-29492055-rlt2m\" (UID: \"cf309a38-5277-430c-afa8-c4f31c8158bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.273125 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5lzl\" (UniqueName: \"kubernetes.io/projected/cf309a38-5277-430c-afa8-c4f31c8158bd-kube-api-access-z5lzl\") pod \"collect-profiles-29492055-rlt2m\" (UID: \"cf309a38-5277-430c-afa8-c4f31c8158bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.273211 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/cf309a38-5277-430c-afa8-c4f31c8158bd-secret-volume\") pod \"collect-profiles-29492055-rlt2m\" (UID: \"cf309a38-5277-430c-afa8-c4f31c8158bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.375316 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf309a38-5277-430c-afa8-c4f31c8158bd-config-volume\") pod \"collect-profiles-29492055-rlt2m\" (UID: \"cf309a38-5277-430c-afa8-c4f31c8158bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.375377 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5lzl\" (UniqueName: \"kubernetes.io/projected/cf309a38-5277-430c-afa8-c4f31c8158bd-kube-api-access-z5lzl\") pod \"collect-profiles-29492055-rlt2m\" (UID: \"cf309a38-5277-430c-afa8-c4f31c8158bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.375465 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf309a38-5277-430c-afa8-c4f31c8158bd-secret-volume\") pod \"collect-profiles-29492055-rlt2m\" (UID: \"cf309a38-5277-430c-afa8-c4f31c8158bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.376494 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf309a38-5277-430c-afa8-c4f31c8158bd-config-volume\") pod \"collect-profiles-29492055-rlt2m\" (UID: \"cf309a38-5277-430c-afa8-c4f31c8158bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.381273 4914 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf309a38-5277-430c-afa8-c4f31c8158bd-secret-volume\") pod \"collect-profiles-29492055-rlt2m\" (UID: \"cf309a38-5277-430c-afa8-c4f31c8158bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.400494 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5lzl\" (UniqueName: \"kubernetes.io/projected/cf309a38-5277-430c-afa8-c4f31c8158bd-kube-api-access-z5lzl\") pod \"collect-profiles-29492055-rlt2m\" (UID: \"cf309a38-5277-430c-afa8-c4f31c8158bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.476691 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" Jan 27 14:15:00 crc kubenswrapper[4914]: I0127 14:15:00.928406 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m"] Jan 27 14:15:01 crc kubenswrapper[4914]: I0127 14:15:01.383401 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" event={"ID":"cf309a38-5277-430c-afa8-c4f31c8158bd","Type":"ContainerStarted","Data":"0f62bba8fba1ced47c48750e3184f1bf806801fddcaa28fd450590da1f5e2e45"} Jan 27 14:15:01 crc kubenswrapper[4914]: I0127 14:15:01.383443 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" event={"ID":"cf309a38-5277-430c-afa8-c4f31c8158bd","Type":"ContainerStarted","Data":"c3f5c7b468ddb6a2f495bec7d1fb88944f2bc2d06a898ded341cabad81d0b36e"} Jan 27 14:15:01 crc kubenswrapper[4914]: I0127 14:15:01.408178 4914 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" podStartSLOduration=1.4081565440000001 podStartE2EDuration="1.408156544s" podCreationTimestamp="2026-01-27 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:15:01.399620138 +0000 UTC m=+1859.711970223" watchObservedRunningTime="2026-01-27 14:15:01.408156544 +0000 UTC m=+1859.720506629" Jan 27 14:15:02 crc kubenswrapper[4914]: I0127 14:15:02.391557 4914 generic.go:334] "Generic (PLEG): container finished" podID="cf309a38-5277-430c-afa8-c4f31c8158bd" containerID="0f62bba8fba1ced47c48750e3184f1bf806801fddcaa28fd450590da1f5e2e45" exitCode=0 Jan 27 14:15:02 crc kubenswrapper[4914]: I0127 14:15:02.391600 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" event={"ID":"cf309a38-5277-430c-afa8-c4f31c8158bd","Type":"ContainerDied","Data":"0f62bba8fba1ced47c48750e3184f1bf806801fddcaa28fd450590da1f5e2e45"} Jan 27 14:15:03 crc kubenswrapper[4914]: I0127 14:15:03.715578 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" Jan 27 14:15:03 crc kubenswrapper[4914]: I0127 14:15:03.741104 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5lzl\" (UniqueName: \"kubernetes.io/projected/cf309a38-5277-430c-afa8-c4f31c8158bd-kube-api-access-z5lzl\") pod \"cf309a38-5277-430c-afa8-c4f31c8158bd\" (UID: \"cf309a38-5277-430c-afa8-c4f31c8158bd\") " Jan 27 14:15:03 crc kubenswrapper[4914]: I0127 14:15:03.741177 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf309a38-5277-430c-afa8-c4f31c8158bd-config-volume\") pod \"cf309a38-5277-430c-afa8-c4f31c8158bd\" (UID: \"cf309a38-5277-430c-afa8-c4f31c8158bd\") " Jan 27 14:15:03 crc kubenswrapper[4914]: I0127 14:15:03.741215 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf309a38-5277-430c-afa8-c4f31c8158bd-secret-volume\") pod \"cf309a38-5277-430c-afa8-c4f31c8158bd\" (UID: \"cf309a38-5277-430c-afa8-c4f31c8158bd\") " Jan 27 14:15:03 crc kubenswrapper[4914]: I0127 14:15:03.742764 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf309a38-5277-430c-afa8-c4f31c8158bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "cf309a38-5277-430c-afa8-c4f31c8158bd" (UID: "cf309a38-5277-430c-afa8-c4f31c8158bd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:03 crc kubenswrapper[4914]: I0127 14:15:03.747767 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf309a38-5277-430c-afa8-c4f31c8158bd-kube-api-access-z5lzl" (OuterVolumeSpecName: "kube-api-access-z5lzl") pod "cf309a38-5277-430c-afa8-c4f31c8158bd" (UID: "cf309a38-5277-430c-afa8-c4f31c8158bd"). 
InnerVolumeSpecName "kube-api-access-z5lzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:03 crc kubenswrapper[4914]: I0127 14:15:03.748997 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf309a38-5277-430c-afa8-c4f31c8158bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cf309a38-5277-430c-afa8-c4f31c8158bd" (UID: "cf309a38-5277-430c-afa8-c4f31c8158bd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:15:03 crc kubenswrapper[4914]: I0127 14:15:03.842996 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5lzl\" (UniqueName: \"kubernetes.io/projected/cf309a38-5277-430c-afa8-c4f31c8158bd-kube-api-access-z5lzl\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:03 crc kubenswrapper[4914]: I0127 14:15:03.843033 4914 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf309a38-5277-430c-afa8-c4f31c8158bd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:03 crc kubenswrapper[4914]: I0127 14:15:03.843044 4914 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf309a38-5277-430c-afa8-c4f31c8158bd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:04 crc kubenswrapper[4914]: I0127 14:15:04.409630 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" event={"ID":"cf309a38-5277-430c-afa8-c4f31c8158bd","Type":"ContainerDied","Data":"c3f5c7b468ddb6a2f495bec7d1fb88944f2bc2d06a898ded341cabad81d0b36e"} Jan 27 14:15:04 crc kubenswrapper[4914]: I0127 14:15:04.409663 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-rlt2m" Jan 27 14:15:04 crc kubenswrapper[4914]: I0127 14:15:04.409685 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3f5c7b468ddb6a2f495bec7d1fb88944f2bc2d06a898ded341cabad81d0b36e" Jan 27 14:15:05 crc kubenswrapper[4914]: I0127 14:15:05.294754 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:15:05 crc kubenswrapper[4914]: E0127 14:15:05.295324 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:15:06 crc kubenswrapper[4914]: I0127 14:15:06.430633 4914 generic.go:334] "Generic (PLEG): container finished" podID="77a76dd2-a27e-4755-881f-3472edf77cd6" containerID="4e023a5d32549e912230382fb121126b075abea26901bf9595192ce3cd5b4928" exitCode=0 Jan 27 14:15:06 crc kubenswrapper[4914]: I0127 14:15:06.430680 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" event={"ID":"77a76dd2-a27e-4755-881f-3472edf77cd6","Type":"ContainerDied","Data":"4e023a5d32549e912230382fb121126b075abea26901bf9595192ce3cd5b4928"} Jan 27 14:15:07 crc kubenswrapper[4914]: I0127 14:15:07.878383 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:15:07 crc kubenswrapper[4914]: I0127 14:15:07.921632 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgqr9\" (UniqueName: \"kubernetes.io/projected/77a76dd2-a27e-4755-881f-3472edf77cd6-kube-api-access-tgqr9\") pod \"77a76dd2-a27e-4755-881f-3472edf77cd6\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " Jan 27 14:15:07 crc kubenswrapper[4914]: I0127 14:15:07.921820 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-inventory\") pod \"77a76dd2-a27e-4755-881f-3472edf77cd6\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " Jan 27 14:15:07 crc kubenswrapper[4914]: I0127 14:15:07.921990 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-bootstrap-combined-ca-bundle\") pod \"77a76dd2-a27e-4755-881f-3472edf77cd6\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " Jan 27 14:15:07 crc kubenswrapper[4914]: I0127 14:15:07.922093 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-ssh-key-openstack-edpm-ipam\") pod \"77a76dd2-a27e-4755-881f-3472edf77cd6\" (UID: \"77a76dd2-a27e-4755-881f-3472edf77cd6\") " Jan 27 14:15:07 crc kubenswrapper[4914]: I0127 14:15:07.933876 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "77a76dd2-a27e-4755-881f-3472edf77cd6" (UID: "77a76dd2-a27e-4755-881f-3472edf77cd6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:15:07 crc kubenswrapper[4914]: I0127 14:15:07.933995 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77a76dd2-a27e-4755-881f-3472edf77cd6-kube-api-access-tgqr9" (OuterVolumeSpecName: "kube-api-access-tgqr9") pod "77a76dd2-a27e-4755-881f-3472edf77cd6" (UID: "77a76dd2-a27e-4755-881f-3472edf77cd6"). InnerVolumeSpecName "kube-api-access-tgqr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:07 crc kubenswrapper[4914]: I0127 14:15:07.951802 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-inventory" (OuterVolumeSpecName: "inventory") pod "77a76dd2-a27e-4755-881f-3472edf77cd6" (UID: "77a76dd2-a27e-4755-881f-3472edf77cd6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:15:07 crc kubenswrapper[4914]: I0127 14:15:07.956079 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "77a76dd2-a27e-4755-881f-3472edf77cd6" (UID: "77a76dd2-a27e-4755-881f-3472edf77cd6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.025294 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.025358 4914 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.025378 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/77a76dd2-a27e-4755-881f-3472edf77cd6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.025389 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgqr9\" (UniqueName: \"kubernetes.io/projected/77a76dd2-a27e-4755-881f-3472edf77cd6-kube-api-access-tgqr9\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.451541 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" event={"ID":"77a76dd2-a27e-4755-881f-3472edf77cd6","Type":"ContainerDied","Data":"030da0036aa753d6bab4cac55a9f4e35b09a8474271b152820593cb816f990cf"} Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.451588 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="030da0036aa753d6bab4cac55a9f4e35b09a8474271b152820593cb816f990cf" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.451658 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.520798 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h"] Jan 27 14:15:08 crc kubenswrapper[4914]: E0127 14:15:08.521281 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf309a38-5277-430c-afa8-c4f31c8158bd" containerName="collect-profiles" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.521304 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf309a38-5277-430c-afa8-c4f31c8158bd" containerName="collect-profiles" Jan 27 14:15:08 crc kubenswrapper[4914]: E0127 14:15:08.521335 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77a76dd2-a27e-4755-881f-3472edf77cd6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.521346 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="77a76dd2-a27e-4755-881f-3472edf77cd6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.521558 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="77a76dd2-a27e-4755-881f-3472edf77cd6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.521590 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf309a38-5277-430c-afa8-c4f31c8158bd" containerName="collect-profiles" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.522383 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.524357 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.525062 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.525561 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.525823 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.533070 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h"] Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.635577 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdc53bfd-51de-436e-837e-bfc1186f706f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h\" (UID: \"bdc53bfd-51de-436e-837e-bfc1186f706f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h" Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.635786 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25p44\" (UniqueName: \"kubernetes.io/projected/bdc53bfd-51de-436e-837e-bfc1186f706f-kube-api-access-25p44\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h\" (UID: \"bdc53bfd-51de-436e-837e-bfc1186f706f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h" Jan 27 14:15:08 crc 
kubenswrapper[4914]: I0127 14:15:08.636220 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdc53bfd-51de-436e-837e-bfc1186f706f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h\" (UID: \"bdc53bfd-51de-436e-837e-bfc1186f706f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h"
Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.738240 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdc53bfd-51de-436e-837e-bfc1186f706f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h\" (UID: \"bdc53bfd-51de-436e-837e-bfc1186f706f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h"
Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.738359 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25p44\" (UniqueName: \"kubernetes.io/projected/bdc53bfd-51de-436e-837e-bfc1186f706f-kube-api-access-25p44\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h\" (UID: \"bdc53bfd-51de-436e-837e-bfc1186f706f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h"
Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.738504 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdc53bfd-51de-436e-837e-bfc1186f706f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h\" (UID: \"bdc53bfd-51de-436e-837e-bfc1186f706f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h"
Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.743616 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdc53bfd-51de-436e-837e-bfc1186f706f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h\" (UID: \"bdc53bfd-51de-436e-837e-bfc1186f706f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h"
Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.745245 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdc53bfd-51de-436e-837e-bfc1186f706f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h\" (UID: \"bdc53bfd-51de-436e-837e-bfc1186f706f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h"
Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.761332 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25p44\" (UniqueName: \"kubernetes.io/projected/bdc53bfd-51de-436e-837e-bfc1186f706f-kube-api-access-25p44\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h\" (UID: \"bdc53bfd-51de-436e-837e-bfc1186f706f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h"
Jan 27 14:15:08 crc kubenswrapper[4914]: I0127 14:15:08.841243 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h"
Jan 27 14:15:09 crc kubenswrapper[4914]: I0127 14:15:09.320137 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h"]
Jan 27 14:15:09 crc kubenswrapper[4914]: I0127 14:15:09.332143 4914 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 14:15:09 crc kubenswrapper[4914]: I0127 14:15:09.488458 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h" event={"ID":"bdc53bfd-51de-436e-837e-bfc1186f706f","Type":"ContainerStarted","Data":"c607bb08ad5e1d89753333dd0ab8a779b6be565a83502e0aa4daad9541ffe12b"}
Jan 27 14:15:10 crc kubenswrapper[4914]: I0127 14:15:10.498265 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h" event={"ID":"bdc53bfd-51de-436e-837e-bfc1186f706f","Type":"ContainerStarted","Data":"64948298afb33aae288f8d38b3cb0832ec8830829ec9411502521dc4d8027337"}
Jan 27 14:15:13 crc kubenswrapper[4914]: I0127 14:15:13.766807 4914 scope.go:117] "RemoveContainer" containerID="e43478a5fba67162a32a4753bf8dd61deb2192806ae6302438aff3f8bfde3711"
Jan 27 14:15:13 crc kubenswrapper[4914]: I0127 14:15:13.799538 4914 scope.go:117] "RemoveContainer" containerID="180c570bcb03998aab3c2ebc8726c7b0369cbc2f3a9166fc85060615ee89bda2"
Jan 27 14:15:13 crc kubenswrapper[4914]: I0127 14:15:13.855075 4914 scope.go:117] "RemoveContainer" containerID="6bb3d784aed651d5b2a423ec61dd80da1e5a52eb1bbd6a800fc03faec3c1e21c"
Jan 27 14:15:13 crc kubenswrapper[4914]: I0127 14:15:13.897960 4914 scope.go:117] "RemoveContainer" containerID="d5a34caa13028e6e9bb316d66ab460490ec3ebcc040d1dae69e12cbb5adaedfd"
Jan 27 14:15:13 crc kubenswrapper[4914]: I0127 14:15:13.939705 4914 scope.go:117] "RemoveContainer" containerID="416641a2fe05bb1857e3128a835c1308f47844ec76d2531de4f6c1463e72486f"
Jan 27 14:15:13 crc kubenswrapper[4914]: I0127 14:15:13.979323 4914 scope.go:117] "RemoveContainer" containerID="15f004f3ed47379357a42bf0e685c29cb5d444b5b17798ee0c3b61450a73cb5e"
Jan 27 14:15:14 crc kubenswrapper[4914]: I0127 14:15:14.032001 4914 scope.go:117] "RemoveContainer" containerID="6a963f30042c01e81036d5aa1e25496194c16fa3f6cf98ff318ece876145b2ed"
Jan 27 14:15:14 crc kubenswrapper[4914]: I0127 14:15:14.049744 4914 scope.go:117] "RemoveContainer" containerID="714faa35d5428cbad8657544f958e9111b3f43151b4fc41bafde03b3931c836d"
Jan 27 14:15:17 crc kubenswrapper[4914]: I0127 14:15:17.294761 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771"
Jan 27 14:15:17 crc kubenswrapper[4914]: I0127 14:15:17.556029 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"f659cd5c9a3ab8758da4b24efeed5972f9d7d7fb86a73f395650bf561d77e063"}
Jan 27 14:15:17 crc kubenswrapper[4914]: I0127 14:15:17.571591 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h" podStartSLOduration=8.822728382 podStartE2EDuration="9.571572365s" podCreationTimestamp="2026-01-27 14:15:08 +0000 UTC" firstStartedPulling="2026-01-27 14:15:09.331636606 +0000 UTC m=+1867.643986691" lastFinishedPulling="2026-01-27 14:15:10.080480579 +0000 UTC m=+1868.392830674" observedRunningTime="2026-01-27 14:15:11.524554857 +0000 UTC m=+1869.836904942" watchObservedRunningTime="2026-01-27 14:15:17.571572365 +0000 UTC m=+1875.883922470"
Jan 27 14:15:20 crc kubenswrapper[4914]: I0127 14:15:20.063120 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-cwqrf"]
Jan 27 14:15:20 crc kubenswrapper[4914]: I0127 14:15:20.070823 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2h5wx"]
Jan 27 14:15:20 crc kubenswrapper[4914]: I0127 14:15:20.078702 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-g8rk8"]
Jan 27 14:15:20 crc kubenswrapper[4914]: I0127 14:15:20.085529 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-cwqrf"]
Jan 27 14:15:20 crc kubenswrapper[4914]: I0127 14:15:20.092632 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-g8rk8"]
Jan 27 14:15:20 crc kubenswrapper[4914]: I0127 14:15:20.101922 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2h5wx"]
Jan 27 14:15:20 crc kubenswrapper[4914]: I0127 14:15:20.303798 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07d55233-43ac-42a0-b604-e38f7bafa346" path="/var/lib/kubelet/pods/07d55233-43ac-42a0-b604-e38f7bafa346/volumes"
Jan 27 14:15:20 crc kubenswrapper[4914]: I0127 14:15:20.304726 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="505474ad-b983-4001-b8b6-f55b1d077e08" path="/var/lib/kubelet/pods/505474ad-b983-4001-b8b6-f55b1d077e08/volumes"
Jan 27 14:15:20 crc kubenswrapper[4914]: I0127 14:15:20.305313 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc571d78-a30b-48ae-9687-31f5b6826a12" path="/var/lib/kubelet/pods/fc571d78-a30b-48ae-9687-31f5b6826a12/volumes"
Jan 27 14:15:26 crc kubenswrapper[4914]: I0127 14:15:26.042048 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-t8d7p"]
Jan 27 14:15:26 crc kubenswrapper[4914]: I0127 14:15:26.056492 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-t8d7p"]
Jan 27 14:15:26 crc kubenswrapper[4914]: I0127 14:15:26.321155 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1" path="/var/lib/kubelet/pods/37d5e7c9-0a2b-4d9f-ace2-7d5b9e5849b1/volumes"
Jan 27 14:15:39 crc kubenswrapper[4914]: I0127 14:15:39.027517 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-8d56k"]
Jan 27 14:15:39 crc kubenswrapper[4914]: I0127 14:15:39.034579 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-8d56k"]
Jan 27 14:15:40 crc kubenswrapper[4914]: I0127 14:15:40.304223 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="131bae56-5108-4750-8056-68133598a109" path="/var/lib/kubelet/pods/131bae56-5108-4750-8056-68133598a109/volumes"
Jan 27 14:16:14 crc kubenswrapper[4914]: I0127 14:16:14.218626 4914 scope.go:117] "RemoveContainer" containerID="e7a8ae2e99521855c053a5fa9a098d033e89dffc625ece54b09edfc02ba894b3"
Jan 27 14:16:14 crc kubenswrapper[4914]: I0127 14:16:14.262209 4914 scope.go:117] "RemoveContainer" containerID="2713ae95d90a75a72e0a7e3e2a0de1f5cdaffa265c5206f5b309c3513aec4ea2"
Jan 27 14:16:14 crc kubenswrapper[4914]: I0127 14:16:14.305954 4914 scope.go:117] "RemoveContainer" containerID="b306d76bfcf734071304e1e40ca59c0382f8e48e49c46e4ac748b0548454c979"
Jan 27 14:16:14 crc kubenswrapper[4914]: I0127 14:16:14.354960 4914 scope.go:117] "RemoveContainer" containerID="cebd09a8c949395b3a9f23c0b61f4017ca5fbe61ee1d3120edfef929567ede6e"
Jan 27 14:16:14 crc kubenswrapper[4914]: I0127 14:16:14.414056 4914 scope.go:117] "RemoveContainer" containerID="efd560b80c2ce3d88f9e0184068e891a8bcf908d97452d5763ddbf26da8c3fd2"
Jan 27 14:16:21 crc kubenswrapper[4914]: I0127 14:16:21.051470 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-ncrjl"]
Jan 27 14:16:21 crc kubenswrapper[4914]: I0127 14:16:21.064051 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-6020-account-create-update-p7x99"]
Jan 27 14:16:21 crc kubenswrapper[4914]: I0127 14:16:21.078654 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9dea-account-create-update-phmgm"]
Jan 27 14:16:21 crc kubenswrapper[4914]: I0127 14:16:21.086276 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-4zv8f"]
Jan 27 14:16:21 crc kubenswrapper[4914]: I0127 14:16:21.095253 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3d52-account-create-update-9v8gv"]
Jan 27 14:16:21 crc kubenswrapper[4914]: I0127 14:16:21.102464 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9dea-account-create-update-phmgm"]
Jan 27 14:16:21 crc kubenswrapper[4914]: I0127 14:16:21.109388 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-smbvn"]
Jan 27 14:16:21 crc kubenswrapper[4914]: I0127 14:16:21.116748 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-ncrjl"]
Jan 27 14:16:21 crc kubenswrapper[4914]: I0127 14:16:21.124392 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-4zv8f"]
Jan 27 14:16:21 crc kubenswrapper[4914]: I0127 14:16:21.131648 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3d52-account-create-update-9v8gv"]
Jan 27 14:16:21 crc kubenswrapper[4914]: I0127 14:16:21.139328 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-smbvn"]
Jan 27 14:16:21 crc kubenswrapper[4914]: I0127 14:16:21.146050 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-6020-account-create-update-p7x99"]
Jan 27 14:16:22 crc kubenswrapper[4914]: I0127 14:16:22.304252 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30de17e6-b0bd-4549-b794-052a3b6c9d84" path="/var/lib/kubelet/pods/30de17e6-b0bd-4549-b794-052a3b6c9d84/volumes"
Jan 27 14:16:22 crc kubenswrapper[4914]: I0127 14:16:22.305628 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6bf34b-de01-4687-8288-6c652539bbd2" path="/var/lib/kubelet/pods/8d6bf34b-de01-4687-8288-6c652539bbd2/volumes"
Jan 27 14:16:22 crc kubenswrapper[4914]: I0127 14:16:22.306524 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc9fc14-27f3-42dd-b037-39b461aa19f1" path="/var/lib/kubelet/pods/8dc9fc14-27f3-42dd-b037-39b461aa19f1/volumes"
Jan 27 14:16:22 crc kubenswrapper[4914]: I0127 14:16:22.307427 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f745b4c-d50b-4e67-902d-ea60fedda7dc" path="/var/lib/kubelet/pods/9f745b4c-d50b-4e67-902d-ea60fedda7dc/volumes"
Jan 27 14:16:22 crc kubenswrapper[4914]: I0127 14:16:22.309069 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c87355-3782-4f33-8e73-14293d16499d" path="/var/lib/kubelet/pods/a9c87355-3782-4f33-8e73-14293d16499d/volumes"
Jan 27 14:16:22 crc kubenswrapper[4914]: I0127 14:16:22.309868 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8921e83-a3ec-4c05-9501-47e07d28a3ac" path="/var/lib/kubelet/pods/d8921e83-a3ec-4c05-9501-47e07d28a3ac/volumes"
Jan 27 14:16:45 crc kubenswrapper[4914]: I0127 14:16:45.886331 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tjprh"]
Jan 27 14:16:45 crc kubenswrapper[4914]: I0127 14:16:45.888566 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:16:45 crc kubenswrapper[4914]: I0127 14:16:45.908652 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjprh"]
Jan 27 14:16:46 crc kubenswrapper[4914]: I0127 14:16:46.042273 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b425ae-cbd3-4e25-becc-0a4c638599b2-utilities\") pod \"certified-operators-tjprh\" (UID: \"26b425ae-cbd3-4e25-becc-0a4c638599b2\") " pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:16:46 crc kubenswrapper[4914]: I0127 14:16:46.042545 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b425ae-cbd3-4e25-becc-0a4c638599b2-catalog-content\") pod \"certified-operators-tjprh\" (UID: \"26b425ae-cbd3-4e25-becc-0a4c638599b2\") " pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:16:46 crc kubenswrapper[4914]: I0127 14:16:46.042621 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4plm\" (UniqueName: \"kubernetes.io/projected/26b425ae-cbd3-4e25-becc-0a4c638599b2-kube-api-access-s4plm\") pod \"certified-operators-tjprh\" (UID: \"26b425ae-cbd3-4e25-becc-0a4c638599b2\") " pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:16:46 crc kubenswrapper[4914]: I0127 14:16:46.144443 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b425ae-cbd3-4e25-becc-0a4c638599b2-catalog-content\") pod \"certified-operators-tjprh\" (UID: \"26b425ae-cbd3-4e25-becc-0a4c638599b2\") " pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:16:46 crc kubenswrapper[4914]: I0127 14:16:46.144510 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4plm\" (UniqueName: \"kubernetes.io/projected/26b425ae-cbd3-4e25-becc-0a4c638599b2-kube-api-access-s4plm\") pod \"certified-operators-tjprh\" (UID: \"26b425ae-cbd3-4e25-becc-0a4c638599b2\") " pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:16:46 crc kubenswrapper[4914]: I0127 14:16:46.144725 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b425ae-cbd3-4e25-becc-0a4c638599b2-utilities\") pod \"certified-operators-tjprh\" (UID: \"26b425ae-cbd3-4e25-becc-0a4c638599b2\") " pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:16:46 crc kubenswrapper[4914]: I0127 14:16:46.145819 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b425ae-cbd3-4e25-becc-0a4c638599b2-utilities\") pod \"certified-operators-tjprh\" (UID: \"26b425ae-cbd3-4e25-becc-0a4c638599b2\") " pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:16:46 crc kubenswrapper[4914]: I0127 14:16:46.145909 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b425ae-cbd3-4e25-becc-0a4c638599b2-catalog-content\") pod \"certified-operators-tjprh\" (UID: \"26b425ae-cbd3-4e25-becc-0a4c638599b2\") " pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:16:46 crc kubenswrapper[4914]: I0127 14:16:46.177805 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4plm\" (UniqueName: \"kubernetes.io/projected/26b425ae-cbd3-4e25-becc-0a4c638599b2-kube-api-access-s4plm\") pod \"certified-operators-tjprh\" (UID: \"26b425ae-cbd3-4e25-becc-0a4c638599b2\") " pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:16:46 crc kubenswrapper[4914]: I0127 14:16:46.248577 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:16:46 crc kubenswrapper[4914]: I0127 14:16:46.776086 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjprh"]
Jan 27 14:16:47 crc kubenswrapper[4914]: I0127 14:16:47.373073 4914 generic.go:334] "Generic (PLEG): container finished" podID="26b425ae-cbd3-4e25-becc-0a4c638599b2" containerID="34910911cb364e40b5e668a554b804d78e613644fa00c9b6b6665f801447bc67" exitCode=0
Jan 27 14:16:47 crc kubenswrapper[4914]: I0127 14:16:47.373129 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjprh" event={"ID":"26b425ae-cbd3-4e25-becc-0a4c638599b2","Type":"ContainerDied","Data":"34910911cb364e40b5e668a554b804d78e613644fa00c9b6b6665f801447bc67"}
Jan 27 14:16:47 crc kubenswrapper[4914]: I0127 14:16:47.373186 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjprh" event={"ID":"26b425ae-cbd3-4e25-becc-0a4c638599b2","Type":"ContainerStarted","Data":"55aa0a15321b1c1ed0d6c47317d0389f05bc42d004c38a9db8b507a915f97d9d"}
Jan 27 14:16:54 crc kubenswrapper[4914]: I0127 14:16:54.435211 4914 generic.go:334] "Generic (PLEG): container finished" podID="26b425ae-cbd3-4e25-becc-0a4c638599b2" containerID="1863fd41355c2888682e458871f2d3f04282770b10ffabf1a7f1bfa5ee099087" exitCode=0
Jan 27 14:16:54 crc kubenswrapper[4914]: I0127 14:16:54.435283 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjprh" event={"ID":"26b425ae-cbd3-4e25-becc-0a4c638599b2","Type":"ContainerDied","Data":"1863fd41355c2888682e458871f2d3f04282770b10ffabf1a7f1bfa5ee099087"}
Jan 27 14:16:55 crc kubenswrapper[4914]: I0127 14:16:55.446539 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjprh" event={"ID":"26b425ae-cbd3-4e25-becc-0a4c638599b2","Type":"ContainerStarted","Data":"7bff98b604a5a61cee0639a13aec272e23607b7ba10e15b911df26498e45928c"}
Jan 27 14:16:55 crc kubenswrapper[4914]: I0127 14:16:55.466858 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tjprh" podStartSLOduration=2.9415532779999998 podStartE2EDuration="10.466821555s" podCreationTimestamp="2026-01-27 14:16:45 +0000 UTC" firstStartedPulling="2026-01-27 14:16:47.376025674 +0000 UTC m=+1965.688375769" lastFinishedPulling="2026-01-27 14:16:54.901293961 +0000 UTC m=+1973.213644046" observedRunningTime="2026-01-27 14:16:55.462300061 +0000 UTC m=+1973.774650166" watchObservedRunningTime="2026-01-27 14:16:55.466821555 +0000 UTC m=+1973.779171630"
Jan 27 14:16:56 crc kubenswrapper[4914]: I0127 14:16:56.249804 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:16:56 crc kubenswrapper[4914]: I0127 14:16:56.249920 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:16:57 crc kubenswrapper[4914]: I0127 14:16:57.299242 4914 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tjprh" podUID="26b425ae-cbd3-4e25-becc-0a4c638599b2" containerName="registry-server" probeResult="failure" output=<
Jan 27 14:16:57 crc kubenswrapper[4914]: timeout: failed to connect service ":50051" within 1s
Jan 27 14:16:57 crc kubenswrapper[4914]: >
Jan 27 14:17:06 crc kubenswrapper[4914]: I0127 14:17:06.306007 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:17:06 crc kubenswrapper[4914]: I0127 14:17:06.373736 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:17:06 crc kubenswrapper[4914]: I0127 14:17:06.444279 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjprh"]
Jan 27 14:17:06 crc kubenswrapper[4914]: I0127 14:17:06.545320 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l52ml"]
Jan 27 14:17:06 crc kubenswrapper[4914]: I0127 14:17:06.545585 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l52ml" podUID="3cd8f086-fe10-40d4-a520-fbe48482af35" containerName="registry-server" containerID="cri-o://664ebdb7f31aca1c6a0f4ecf590f0245e1d0e3ed20fae6b087dfc1558f9053a9" gracePeriod=2
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.067887 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l52ml"
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.181285 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cd8f086-fe10-40d4-a520-fbe48482af35-utilities\") pod \"3cd8f086-fe10-40d4-a520-fbe48482af35\" (UID: \"3cd8f086-fe10-40d4-a520-fbe48482af35\") "
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.181544 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cd8f086-fe10-40d4-a520-fbe48482af35-catalog-content\") pod \"3cd8f086-fe10-40d4-a520-fbe48482af35\" (UID: \"3cd8f086-fe10-40d4-a520-fbe48482af35\") "
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.181580 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plsb2\" (UniqueName: \"kubernetes.io/projected/3cd8f086-fe10-40d4-a520-fbe48482af35-kube-api-access-plsb2\") pod \"3cd8f086-fe10-40d4-a520-fbe48482af35\" (UID: \"3cd8f086-fe10-40d4-a520-fbe48482af35\") "
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.182064 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cd8f086-fe10-40d4-a520-fbe48482af35-utilities" (OuterVolumeSpecName: "utilities") pod "3cd8f086-fe10-40d4-a520-fbe48482af35" (UID: "3cd8f086-fe10-40d4-a520-fbe48482af35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.197481 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cd8f086-fe10-40d4-a520-fbe48482af35-kube-api-access-plsb2" (OuterVolumeSpecName: "kube-api-access-plsb2") pod "3cd8f086-fe10-40d4-a520-fbe48482af35" (UID: "3cd8f086-fe10-40d4-a520-fbe48482af35"). InnerVolumeSpecName "kube-api-access-plsb2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.240547 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cd8f086-fe10-40d4-a520-fbe48482af35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cd8f086-fe10-40d4-a520-fbe48482af35" (UID: "3cd8f086-fe10-40d4-a520-fbe48482af35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.283576 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cd8f086-fe10-40d4-a520-fbe48482af35-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.283607 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plsb2\" (UniqueName: \"kubernetes.io/projected/3cd8f086-fe10-40d4-a520-fbe48482af35-kube-api-access-plsb2\") on node \"crc\" DevicePath \"\""
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.283619 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cd8f086-fe10-40d4-a520-fbe48482af35-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.554991 4914 generic.go:334] "Generic (PLEG): container finished" podID="3cd8f086-fe10-40d4-a520-fbe48482af35" containerID="664ebdb7f31aca1c6a0f4ecf590f0245e1d0e3ed20fae6b087dfc1558f9053a9" exitCode=0
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.555080 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l52ml"
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.555107 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l52ml" event={"ID":"3cd8f086-fe10-40d4-a520-fbe48482af35","Type":"ContainerDied","Data":"664ebdb7f31aca1c6a0f4ecf590f0245e1d0e3ed20fae6b087dfc1558f9053a9"}
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.555536 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l52ml" event={"ID":"3cd8f086-fe10-40d4-a520-fbe48482af35","Type":"ContainerDied","Data":"7806ff7afc4d1b885fbfaba725060002f676e16f918da04a80dd59aa9eac48bd"}
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.555578 4914 scope.go:117] "RemoveContainer" containerID="664ebdb7f31aca1c6a0f4ecf590f0245e1d0e3ed20fae6b087dfc1558f9053a9"
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.601013 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l52ml"]
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.609659 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l52ml"]
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.835465 4914 scope.go:117] "RemoveContainer" containerID="0d0dd34bea36d441aebd2f13585fde58b6371db7710c12a11318bba420d02904"
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.885188 4914 scope.go:117] "RemoveContainer" containerID="a07894d475f27a8ef59422c1e594dfd50f55e8de2e18d7e8f4a9caab1df812eb"
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.995538 4914 scope.go:117] "RemoveContainer" containerID="664ebdb7f31aca1c6a0f4ecf590f0245e1d0e3ed20fae6b087dfc1558f9053a9"
Jan 27 14:17:07 crc kubenswrapper[4914]: E0127 14:17:07.997191 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"664ebdb7f31aca1c6a0f4ecf590f0245e1d0e3ed20fae6b087dfc1558f9053a9\": container with ID starting with 664ebdb7f31aca1c6a0f4ecf590f0245e1d0e3ed20fae6b087dfc1558f9053a9 not found: ID does not exist" containerID="664ebdb7f31aca1c6a0f4ecf590f0245e1d0e3ed20fae6b087dfc1558f9053a9"
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.997229 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"664ebdb7f31aca1c6a0f4ecf590f0245e1d0e3ed20fae6b087dfc1558f9053a9"} err="failed to get container status \"664ebdb7f31aca1c6a0f4ecf590f0245e1d0e3ed20fae6b087dfc1558f9053a9\": rpc error: code = NotFound desc = could not find container \"664ebdb7f31aca1c6a0f4ecf590f0245e1d0e3ed20fae6b087dfc1558f9053a9\": container with ID starting with 664ebdb7f31aca1c6a0f4ecf590f0245e1d0e3ed20fae6b087dfc1558f9053a9 not found: ID does not exist"
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.997257 4914 scope.go:117] "RemoveContainer" containerID="0d0dd34bea36d441aebd2f13585fde58b6371db7710c12a11318bba420d02904"
Jan 27 14:17:07 crc kubenswrapper[4914]: E0127 14:17:07.997627 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d0dd34bea36d441aebd2f13585fde58b6371db7710c12a11318bba420d02904\": container with ID starting with 0d0dd34bea36d441aebd2f13585fde58b6371db7710c12a11318bba420d02904 not found: ID does not exist" containerID="0d0dd34bea36d441aebd2f13585fde58b6371db7710c12a11318bba420d02904"
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.997652 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d0dd34bea36d441aebd2f13585fde58b6371db7710c12a11318bba420d02904"} err="failed to get container status \"0d0dd34bea36d441aebd2f13585fde58b6371db7710c12a11318bba420d02904\": rpc error: code = NotFound desc = could not find container \"0d0dd34bea36d441aebd2f13585fde58b6371db7710c12a11318bba420d02904\": container with ID starting with 0d0dd34bea36d441aebd2f13585fde58b6371db7710c12a11318bba420d02904 not found: ID does not exist"
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.997668 4914 scope.go:117] "RemoveContainer" containerID="a07894d475f27a8ef59422c1e594dfd50f55e8de2e18d7e8f4a9caab1df812eb"
Jan 27 14:17:07 crc kubenswrapper[4914]: E0127 14:17:07.997965 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07894d475f27a8ef59422c1e594dfd50f55e8de2e18d7e8f4a9caab1df812eb\": container with ID starting with a07894d475f27a8ef59422c1e594dfd50f55e8de2e18d7e8f4a9caab1df812eb not found: ID does not exist" containerID="a07894d475f27a8ef59422c1e594dfd50f55e8de2e18d7e8f4a9caab1df812eb"
Jan 27 14:17:07 crc kubenswrapper[4914]: I0127 14:17:07.998016 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07894d475f27a8ef59422c1e594dfd50f55e8de2e18d7e8f4a9caab1df812eb"} err="failed to get container status \"a07894d475f27a8ef59422c1e594dfd50f55e8de2e18d7e8f4a9caab1df812eb\": rpc error: code = NotFound desc = could not find container \"a07894d475f27a8ef59422c1e594dfd50f55e8de2e18d7e8f4a9caab1df812eb\": container with ID starting with a07894d475f27a8ef59422c1e594dfd50f55e8de2e18d7e8f4a9caab1df812eb not found: ID does not exist"
Jan 27 14:17:08 crc kubenswrapper[4914]: I0127 14:17:08.313269 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cd8f086-fe10-40d4-a520-fbe48482af35" path="/var/lib/kubelet/pods/3cd8f086-fe10-40d4-a520-fbe48482af35/volumes"
Jan 27 14:17:11 crc kubenswrapper[4914]: I0127 14:17:11.048655 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xbz4b"]
Jan 27 14:17:11 crc kubenswrapper[4914]: I0127 14:17:11.057230 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xbz4b"]
Jan 27 14:17:12 crc kubenswrapper[4914]: I0127 14:17:12.305795 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d81784-ad81-47ce-befb-d2ec09617b1c" path="/var/lib/kubelet/pods/58d81784-ad81-47ce-befb-d2ec09617b1c/volumes"
Jan 27 14:17:14 crc kubenswrapper[4914]: I0127 14:17:14.533151 4914 scope.go:117] "RemoveContainer" containerID="b284019a99a6f8755cd21a5c4044ffecd831e19c7b7182c85436ca498b081813"
Jan 27 14:17:14 crc kubenswrapper[4914]: I0127 14:17:14.555331 4914 scope.go:117] "RemoveContainer" containerID="506390ca40c1346c82018fa458facd197e0d38af2a54f4ecc82d3415332459b1"
Jan 27 14:17:14 crc kubenswrapper[4914]: I0127 14:17:14.602188 4914 scope.go:117] "RemoveContainer" containerID="c4a240aee59a7c7c3e19351e4a56d3db9f319196b8c7e37c539352c8ced4ebb9"
Jan 27 14:17:14 crc kubenswrapper[4914]: I0127 14:17:14.647814 4914 scope.go:117] "RemoveContainer" containerID="6ec8deadd6a92a40eb61f5e84ff91f0b61711bd06fd87c23520319b51731217b"
Jan 27 14:17:14 crc kubenswrapper[4914]: I0127 14:17:14.703464 4914 scope.go:117] "RemoveContainer" containerID="53b4f8943c226a7eb989900d3f6a4344f58eb74ddc02eb261fec8885ba42dc09"
Jan 27 14:17:14 crc kubenswrapper[4914]: I0127 14:17:14.747528 4914 scope.go:117] "RemoveContainer" containerID="bb3a74d6b4bb7b0425c6522a662de5b37d55c95b0ee9de72aa1e4becfab41a82"
Jan 27 14:17:14 crc kubenswrapper[4914]: I0127 14:17:14.794964 4914 scope.go:117] "RemoveContainer" containerID="d3a53c20290ae91c6599b263904d53c65f4ebaa727da8689e26c279ecee27a54"
Jan 27 14:17:29 crc kubenswrapper[4914]: I0127 14:17:29.045345 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-hvsxg"]
Jan 27 14:17:29 crc kubenswrapper[4914]: I0127 14:17:29.057877 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-hvsxg"]
Jan 27 14:17:30 crc kubenswrapper[4914]: I0127 14:17:30.309862 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3f7735-ffe2-40bc-9055-67f89a4a3a95" path="/var/lib/kubelet/pods/3b3f7735-ffe2-40bc-9055-67f89a4a3a95/volumes"
Jan 27 14:17:31 crc kubenswrapper[4914]: I0127 14:17:31.039325 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m6sr7"]
Jan 27 14:17:31 crc kubenswrapper[4914]: I0127 14:17:31.047390 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m6sr7"]
Jan 27 14:17:32 crc kubenswrapper[4914]: I0127 14:17:32.307048 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d67207a-f8f7-4b0d-aa50-be147a8ba810" path="/var/lib/kubelet/pods/1d67207a-f8f7-4b0d-aa50-be147a8ba810/volumes"
Jan 27 14:17:37 crc kubenswrapper[4914]: I0127 14:17:37.690700 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 14:17:37 crc kubenswrapper[4914]: I0127 14:17:37.692030 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 14:17:43 crc kubenswrapper[4914]: I0127 14:17:43.893427 4914 generic.go:334] "Generic (PLEG): container finished" podID="bdc53bfd-51de-436e-837e-bfc1186f706f" containerID="64948298afb33aae288f8d38b3cb0832ec8830829ec9411502521dc4d8027337" exitCode=0
Jan 27 14:17:43 crc kubenswrapper[4914]: I0127 14:17:43.893536 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h" event={"ID":"bdc53bfd-51de-436e-837e-bfc1186f706f","Type":"ContainerDied","Data":"64948298afb33aae288f8d38b3cb0832ec8830829ec9411502521dc4d8027337"}
Jan 27 14:17:45 crc kubenswrapper[4914]: I0127 14:17:45.912545 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h" event={"ID":"bdc53bfd-51de-436e-837e-bfc1186f706f","Type":"ContainerDied","Data":"c607bb08ad5e1d89753333dd0ab8a779b6be565a83502e0aa4daad9541ffe12b"}
Jan 27 14:17:45 crc kubenswrapper[4914]: I0127 14:17:45.913310 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c607bb08ad5e1d89753333dd0ab8a779b6be565a83502e0aa4daad9541ffe12b"
Jan 27 14:17:45 crc kubenswrapper[4914]: I0127 14:17:45.914526 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h"
Jan 27 14:17:45 crc kubenswrapper[4914]: I0127 14:17:45.989356 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdc53bfd-51de-436e-837e-bfc1186f706f-inventory\") pod \"bdc53bfd-51de-436e-837e-bfc1186f706f\" (UID: \"bdc53bfd-51de-436e-837e-bfc1186f706f\") "
Jan 27 14:17:45 crc kubenswrapper[4914]: I0127 14:17:45.989443 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25p44\" (UniqueName: \"kubernetes.io/projected/bdc53bfd-51de-436e-837e-bfc1186f706f-kube-api-access-25p44\") pod \"bdc53bfd-51de-436e-837e-bfc1186f706f\" (UID: \"bdc53bfd-51de-436e-837e-bfc1186f706f\") "
Jan 27 14:17:45 crc kubenswrapper[4914]: I0127 14:17:45.989632 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdc53bfd-51de-436e-837e-bfc1186f706f-ssh-key-openstack-edpm-ipam\") pod \"bdc53bfd-51de-436e-837e-bfc1186f706f\" (UID: \"bdc53bfd-51de-436e-837e-bfc1186f706f\") "
Jan 27 14:17:45 crc kubenswrapper[4914]: I0127 14:17:45.996216 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc53bfd-51de-436e-837e-bfc1186f706f-kube-api-access-25p44" (OuterVolumeSpecName: "kube-api-access-25p44") pod "bdc53bfd-51de-436e-837e-bfc1186f706f" (UID: "bdc53bfd-51de-436e-837e-bfc1186f706f"). InnerVolumeSpecName "kube-api-access-25p44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.016309 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdc53bfd-51de-436e-837e-bfc1186f706f-inventory" (OuterVolumeSpecName: "inventory") pod "bdc53bfd-51de-436e-837e-bfc1186f706f" (UID: "bdc53bfd-51de-436e-837e-bfc1186f706f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.024798 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdc53bfd-51de-436e-837e-bfc1186f706f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bdc53bfd-51de-436e-837e-bfc1186f706f" (UID: "bdc53bfd-51de-436e-837e-bfc1186f706f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.091904 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdc53bfd-51de-436e-837e-bfc1186f706f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.091992 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdc53bfd-51de-436e-837e-bfc1186f706f-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.092003 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25p44\" (UniqueName: \"kubernetes.io/projected/bdc53bfd-51de-436e-837e-bfc1186f706f-kube-api-access-25p44\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.618625 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6"] Jan 27 14:17:46 crc kubenswrapper[4914]: E0127 14:17:46.619473 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cd8f086-fe10-40d4-a520-fbe48482af35" containerName="extract-utilities" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.619496 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cd8f086-fe10-40d4-a520-fbe48482af35" containerName="extract-utilities" Jan 27 14:17:46 crc kubenswrapper[4914]: E0127 14:17:46.619512 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc53bfd-51de-436e-837e-bfc1186f706f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.619522 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc53bfd-51de-436e-837e-bfc1186f706f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 14:17:46 crc kubenswrapper[4914]: E0127 
14:17:46.619537 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cd8f086-fe10-40d4-a520-fbe48482af35" containerName="registry-server" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.619546 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cd8f086-fe10-40d4-a520-fbe48482af35" containerName="registry-server" Jan 27 14:17:46 crc kubenswrapper[4914]: E0127 14:17:46.619585 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cd8f086-fe10-40d4-a520-fbe48482af35" containerName="extract-content" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.619593 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cd8f086-fe10-40d4-a520-fbe48482af35" containerName="extract-content" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.619801 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cd8f086-fe10-40d4-a520-fbe48482af35" containerName="registry-server" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.619845 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc53bfd-51de-436e-837e-bfc1186f706f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.620609 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.629085 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6"] Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.703010 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk4mx\" (UniqueName: \"kubernetes.io/projected/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-kube-api-access-bk4mx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6\" (UID: \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.703122 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6\" (UID: \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.703168 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6\" (UID: \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.804308 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk4mx\" (UniqueName: 
\"kubernetes.io/projected/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-kube-api-access-bk4mx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6\" (UID: \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.804951 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6\" (UID: \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.806009 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6\" (UID: \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.809685 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6\" (UID: \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.809709 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6\" (UID: 
\"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.822164 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk4mx\" (UniqueName: \"kubernetes.io/projected/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-kube-api-access-bk4mx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6\" (UID: \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.924555 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h" Jan 27 14:17:46 crc kubenswrapper[4914]: I0127 14:17:46.944277 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" Jan 27 14:17:47 crc kubenswrapper[4914]: I0127 14:17:47.453809 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6"] Jan 27 14:17:47 crc kubenswrapper[4914]: I0127 14:17:47.936439 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" event={"ID":"9ef8835d-5ed1-428f-899f-45c41c5ffb4e","Type":"ContainerStarted","Data":"b54c94b9c9fb26eaa14deac50b00bc0011cae58fa882fe92cd5a9f1c75985d75"} Jan 27 14:17:48 crc kubenswrapper[4914]: I0127 14:17:48.951008 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" event={"ID":"9ef8835d-5ed1-428f-899f-45c41c5ffb4e","Type":"ContainerStarted","Data":"9d0856213c5afe66c614e240a28ca6ebed8f975d4b0d7082b6010e02ade84b31"} Jan 27 14:17:48 crc kubenswrapper[4914]: I0127 14:17:48.969856 4914 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" podStartSLOduration=2.492001832 podStartE2EDuration="2.969821138s" podCreationTimestamp="2026-01-27 14:17:46 +0000 UTC" firstStartedPulling="2026-01-27 14:17:47.460791143 +0000 UTC m=+2025.773141228" lastFinishedPulling="2026-01-27 14:17:47.938610449 +0000 UTC m=+2026.250960534" observedRunningTime="2026-01-27 14:17:48.968468292 +0000 UTC m=+2027.280818447" watchObservedRunningTime="2026-01-27 14:17:48.969821138 +0000 UTC m=+2027.282171223" Jan 27 14:18:07 crc kubenswrapper[4914]: I0127 14:18:07.691050 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:18:07 crc kubenswrapper[4914]: I0127 14:18:07.691594 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:18:14 crc kubenswrapper[4914]: I0127 14:18:14.039179 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-gzxfk"] Jan 27 14:18:14 crc kubenswrapper[4914]: I0127 14:18:14.046929 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-gzxfk"] Jan 27 14:18:14 crc kubenswrapper[4914]: I0127 14:18:14.308118 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2c043b-2bd3-4238-bcd2-f44f4191cad8" path="/var/lib/kubelet/pods/fc2c043b-2bd3-4238-bcd2-f44f4191cad8/volumes" Jan 27 14:18:14 crc kubenswrapper[4914]: I0127 14:18:14.958238 4914 scope.go:117] 
"RemoveContainer" containerID="4fab63ecbb2167d4aa10709d657a7afa1815e139ebae6fe7d1cb9e61155c09df" Jan 27 14:18:15 crc kubenswrapper[4914]: I0127 14:18:15.009793 4914 scope.go:117] "RemoveContainer" containerID="7771ca1de9d47c34ff125fa812e479371d69aae563a4f8ff18c5927911fe867c" Jan 27 14:18:15 crc kubenswrapper[4914]: I0127 14:18:15.064336 4914 scope.go:117] "RemoveContainer" containerID="55712590807c72cd4dc50ff38ec9f3e3a3dfe9138a73ee090b854e60939b247b" Jan 27 14:18:37 crc kubenswrapper[4914]: I0127 14:18:37.691498 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:18:37 crc kubenswrapper[4914]: I0127 14:18:37.692082 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:18:37 crc kubenswrapper[4914]: I0127 14:18:37.692123 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 14:18:37 crc kubenswrapper[4914]: I0127 14:18:37.692613 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f659cd5c9a3ab8758da4b24efeed5972f9d7d7fb86a73f395650bf561d77e063"} pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:18:37 crc kubenswrapper[4914]: I0127 14:18:37.692668 4914 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" containerID="cri-o://f659cd5c9a3ab8758da4b24efeed5972f9d7d7fb86a73f395650bf561d77e063" gracePeriod=600 Jan 27 14:18:38 crc kubenswrapper[4914]: I0127 14:18:38.347905 4914 generic.go:334] "Generic (PLEG): container finished" podID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerID="f659cd5c9a3ab8758da4b24efeed5972f9d7d7fb86a73f395650bf561d77e063" exitCode=0 Jan 27 14:18:38 crc kubenswrapper[4914]: I0127 14:18:38.347957 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerDied","Data":"f659cd5c9a3ab8758da4b24efeed5972f9d7d7fb86a73f395650bf561d77e063"} Jan 27 14:18:38 crc kubenswrapper[4914]: I0127 14:18:38.348474 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc"} Jan 27 14:18:38 crc kubenswrapper[4914]: I0127 14:18:38.348495 4914 scope.go:117] "RemoveContainer" containerID="a86405851ac8ff824eac26a42b64973c87114825a28766ce4f51a97133b82771" Jan 27 14:18:57 crc kubenswrapper[4914]: I0127 14:18:57.503821 4914 generic.go:334] "Generic (PLEG): container finished" podID="9ef8835d-5ed1-428f-899f-45c41c5ffb4e" containerID="9d0856213c5afe66c614e240a28ca6ebed8f975d4b0d7082b6010e02ade84b31" exitCode=0 Jan 27 14:18:57 crc kubenswrapper[4914]: I0127 14:18:57.504203 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" event={"ID":"9ef8835d-5ed1-428f-899f-45c41c5ffb4e","Type":"ContainerDied","Data":"9d0856213c5afe66c614e240a28ca6ebed8f975d4b0d7082b6010e02ade84b31"} Jan 27 14:18:58 crc 
kubenswrapper[4914]: I0127 14:18:58.930466 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.079640 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-inventory\") pod \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\" (UID: \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\") " Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.079780 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-ssh-key-openstack-edpm-ipam\") pod \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\" (UID: \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\") " Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.079875 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk4mx\" (UniqueName: \"kubernetes.io/projected/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-kube-api-access-bk4mx\") pod \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\" (UID: \"9ef8835d-5ed1-428f-899f-45c41c5ffb4e\") " Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.085875 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-kube-api-access-bk4mx" (OuterVolumeSpecName: "kube-api-access-bk4mx") pod "9ef8835d-5ed1-428f-899f-45c41c5ffb4e" (UID: "9ef8835d-5ed1-428f-899f-45c41c5ffb4e"). InnerVolumeSpecName "kube-api-access-bk4mx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.109911 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-inventory" (OuterVolumeSpecName: "inventory") pod "9ef8835d-5ed1-428f-899f-45c41c5ffb4e" (UID: "9ef8835d-5ed1-428f-899f-45c41c5ffb4e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.113791 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9ef8835d-5ed1-428f-899f-45c41c5ffb4e" (UID: "9ef8835d-5ed1-428f-899f-45c41c5ffb4e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.182969 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk4mx\" (UniqueName: \"kubernetes.io/projected/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-kube-api-access-bk4mx\") on node \"crc\" DevicePath \"\"" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.183022 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.183035 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef8835d-5ed1-428f-899f-45c41c5ffb4e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.526280 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" 
event={"ID":"9ef8835d-5ed1-428f-899f-45c41c5ffb4e","Type":"ContainerDied","Data":"b54c94b9c9fb26eaa14deac50b00bc0011cae58fa882fe92cd5a9f1c75985d75"} Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.526574 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b54c94b9c9fb26eaa14deac50b00bc0011cae58fa882fe92cd5a9f1c75985d75" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.526625 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.623146 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb"] Jan 27 14:18:59 crc kubenswrapper[4914]: E0127 14:18:59.623534 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef8835d-5ed1-428f-899f-45c41c5ffb4e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.623550 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef8835d-5ed1-428f-899f-45c41c5ffb4e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.623714 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef8835d-5ed1-428f-899f-45c41c5ffb4e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.624330 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.626821 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.629255 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.630595 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.630683 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.647407 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb"] Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.696211 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb\" (UID: \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.696303 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbh6l\" (UniqueName: \"kubernetes.io/projected/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-kube-api-access-fbh6l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb\" (UID: \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" Jan 27 
14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.696330 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb\" (UID: \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.797890 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb\" (UID: \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.798093 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb\" (UID: \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.798195 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbh6l\" (UniqueName: \"kubernetes.io/projected/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-kube-api-access-fbh6l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb\" (UID: \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.819375 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbh6l\" (UniqueName: 
\"kubernetes.io/projected/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-kube-api-access-fbh6l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb\" (UID: \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.820191 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb\" (UID: \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.823398 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb\" (UID: \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" Jan 27 14:18:59 crc kubenswrapper[4914]: I0127 14:18:59.947197 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" Jan 27 14:19:00 crc kubenswrapper[4914]: I0127 14:19:00.553377 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb"] Jan 27 14:19:01 crc kubenswrapper[4914]: I0127 14:19:01.546422 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" event={"ID":"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9","Type":"ContainerStarted","Data":"0e53b9b6e312f6a79ea7896cd149c2ff251ecaac9ed69ac2beb84513817a4ca7"} Jan 27 14:19:01 crc kubenswrapper[4914]: I0127 14:19:01.546909 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" event={"ID":"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9","Type":"ContainerStarted","Data":"8754c41042483f863238c88e39446b64fa7a2813ecb0aefe1ef9447e6c86cd4a"} Jan 27 14:19:01 crc kubenswrapper[4914]: I0127 14:19:01.561958 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" podStartSLOduration=1.832640444 podStartE2EDuration="2.561941393s" podCreationTimestamp="2026-01-27 14:18:59 +0000 UTC" firstStartedPulling="2026-01-27 14:19:00.559913432 +0000 UTC m=+2098.872263517" lastFinishedPulling="2026-01-27 14:19:01.289214381 +0000 UTC m=+2099.601564466" observedRunningTime="2026-01-27 14:19:01.561326736 +0000 UTC m=+2099.873676821" watchObservedRunningTime="2026-01-27 14:19:01.561941393 +0000 UTC m=+2099.874291478" Jan 27 14:19:06 crc kubenswrapper[4914]: I0127 14:19:06.635888 4914 generic.go:334] "Generic (PLEG): container finished" podID="e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9" containerID="0e53b9b6e312f6a79ea7896cd149c2ff251ecaac9ed69ac2beb84513817a4ca7" exitCode=0 Jan 27 14:19:06 crc kubenswrapper[4914]: I0127 14:19:06.636095 4914 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" event={"ID":"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9","Type":"ContainerDied","Data":"0e53b9b6e312f6a79ea7896cd149c2ff251ecaac9ed69ac2beb84513817a4ca7"} Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.025805 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.083272 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-inventory\") pod \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\" (UID: \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\") " Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.083692 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbh6l\" (UniqueName: \"kubernetes.io/projected/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-kube-api-access-fbh6l\") pod \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\" (UID: \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\") " Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.083877 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-ssh-key-openstack-edpm-ipam\") pod \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\" (UID: \"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9\") " Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.093822 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-kube-api-access-fbh6l" (OuterVolumeSpecName: "kube-api-access-fbh6l") pod "e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9" (UID: "e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9"). InnerVolumeSpecName "kube-api-access-fbh6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.119427 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9" (UID: "e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.120549 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-inventory" (OuterVolumeSpecName: "inventory") pod "e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9" (UID: "e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.185966 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.186017 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.186035 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbh6l\" (UniqueName: \"kubernetes.io/projected/e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9-kube-api-access-fbh6l\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.652054 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" 
event={"ID":"e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9","Type":"ContainerDied","Data":"8754c41042483f863238c88e39446b64fa7a2813ecb0aefe1ef9447e6c86cd4a"} Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.652097 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8754c41042483f863238c88e39446b64fa7a2813ecb0aefe1ef9447e6c86cd4a" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.652486 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.722731 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59"] Jan 27 14:19:08 crc kubenswrapper[4914]: E0127 14:19:08.723201 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.723219 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.723414 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.724057 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.727074 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.727307 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.727461 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.727697 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.736796 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59"] Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.797056 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00e18877-2928-4039-b2e4-562989a3cdb5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlf59\" (UID: \"00e18877-2928-4039-b2e4-562989a3cdb5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.797369 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00e18877-2928-4039-b2e4-562989a3cdb5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlf59\" (UID: \"00e18877-2928-4039-b2e4-562989a3cdb5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.797580 4914 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdvxp\" (UniqueName: \"kubernetes.io/projected/00e18877-2928-4039-b2e4-562989a3cdb5-kube-api-access-zdvxp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlf59\" (UID: \"00e18877-2928-4039-b2e4-562989a3cdb5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.899300 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00e18877-2928-4039-b2e4-562989a3cdb5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlf59\" (UID: \"00e18877-2928-4039-b2e4-562989a3cdb5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.899402 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdvxp\" (UniqueName: \"kubernetes.io/projected/00e18877-2928-4039-b2e4-562989a3cdb5-kube-api-access-zdvxp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlf59\" (UID: \"00e18877-2928-4039-b2e4-562989a3cdb5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.899492 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00e18877-2928-4039-b2e4-562989a3cdb5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlf59\" (UID: \"00e18877-2928-4039-b2e4-562989a3cdb5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.903392 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/00e18877-2928-4039-b2e4-562989a3cdb5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlf59\" (UID: \"00e18877-2928-4039-b2e4-562989a3cdb5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.904185 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00e18877-2928-4039-b2e4-562989a3cdb5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlf59\" (UID: \"00e18877-2928-4039-b2e4-562989a3cdb5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" Jan 27 14:19:08 crc kubenswrapper[4914]: I0127 14:19:08.916623 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdvxp\" (UniqueName: \"kubernetes.io/projected/00e18877-2928-4039-b2e4-562989a3cdb5-kube-api-access-zdvxp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlf59\" (UID: \"00e18877-2928-4039-b2e4-562989a3cdb5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" Jan 27 14:19:09 crc kubenswrapper[4914]: I0127 14:19:09.049032 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" Jan 27 14:19:09 crc kubenswrapper[4914]: I0127 14:19:09.555468 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59"] Jan 27 14:19:09 crc kubenswrapper[4914]: I0127 14:19:09.660300 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" event={"ID":"00e18877-2928-4039-b2e4-562989a3cdb5","Type":"ContainerStarted","Data":"7543da7010439e6d7c5405e15ad4092e6046806aea338d0ca4374568de2b737d"} Jan 27 14:19:10 crc kubenswrapper[4914]: I0127 14:19:10.671365 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" event={"ID":"00e18877-2928-4039-b2e4-562989a3cdb5","Type":"ContainerStarted","Data":"abcce87510daef56d606d73912cdb8c1c45ed0c1d13eef2015b3777c2548b5b8"} Jan 27 14:19:10 crc kubenswrapper[4914]: I0127 14:19:10.695143 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" podStartSLOduration=1.941416975 podStartE2EDuration="2.695123801s" podCreationTimestamp="2026-01-27 14:19:08 +0000 UTC" firstStartedPulling="2026-01-27 14:19:09.558373608 +0000 UTC m=+2107.870723703" lastFinishedPulling="2026-01-27 14:19:10.312080454 +0000 UTC m=+2108.624430529" observedRunningTime="2026-01-27 14:19:10.689228189 +0000 UTC m=+2109.001578274" watchObservedRunningTime="2026-01-27 14:19:10.695123801 +0000 UTC m=+2109.007473886" Jan 27 14:19:25 crc kubenswrapper[4914]: I0127 14:19:25.996436 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p2jxt"] Jan 27 14:19:25 crc kubenswrapper[4914]: I0127 14:19:25.998734 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:26 crc kubenswrapper[4914]: I0127 14:19:26.007145 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p2jxt"] Jan 27 14:19:26 crc kubenswrapper[4914]: I0127 14:19:26.142236 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cb2db8e-4bbe-4530-970a-80a67f950e08-utilities\") pod \"redhat-operators-p2jxt\" (UID: \"6cb2db8e-4bbe-4530-970a-80a67f950e08\") " pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:26 crc kubenswrapper[4914]: I0127 14:19:26.142305 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdqpx\" (UniqueName: \"kubernetes.io/projected/6cb2db8e-4bbe-4530-970a-80a67f950e08-kube-api-access-cdqpx\") pod \"redhat-operators-p2jxt\" (UID: \"6cb2db8e-4bbe-4530-970a-80a67f950e08\") " pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:26 crc kubenswrapper[4914]: I0127 14:19:26.142468 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cb2db8e-4bbe-4530-970a-80a67f950e08-catalog-content\") pod \"redhat-operators-p2jxt\" (UID: \"6cb2db8e-4bbe-4530-970a-80a67f950e08\") " pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:26 crc kubenswrapper[4914]: I0127 14:19:26.244903 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cb2db8e-4bbe-4530-970a-80a67f950e08-catalog-content\") pod \"redhat-operators-p2jxt\" (UID: \"6cb2db8e-4bbe-4530-970a-80a67f950e08\") " pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:26 crc kubenswrapper[4914]: I0127 14:19:26.245111 4914 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cb2db8e-4bbe-4530-970a-80a67f950e08-utilities\") pod \"redhat-operators-p2jxt\" (UID: \"6cb2db8e-4bbe-4530-970a-80a67f950e08\") " pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:26 crc kubenswrapper[4914]: I0127 14:19:26.245167 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdqpx\" (UniqueName: \"kubernetes.io/projected/6cb2db8e-4bbe-4530-970a-80a67f950e08-kube-api-access-cdqpx\") pod \"redhat-operators-p2jxt\" (UID: \"6cb2db8e-4bbe-4530-970a-80a67f950e08\") " pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:26 crc kubenswrapper[4914]: I0127 14:19:26.246070 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cb2db8e-4bbe-4530-970a-80a67f950e08-catalog-content\") pod \"redhat-operators-p2jxt\" (UID: \"6cb2db8e-4bbe-4530-970a-80a67f950e08\") " pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:26 crc kubenswrapper[4914]: I0127 14:19:26.246218 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cb2db8e-4bbe-4530-970a-80a67f950e08-utilities\") pod \"redhat-operators-p2jxt\" (UID: \"6cb2db8e-4bbe-4530-970a-80a67f950e08\") " pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:26 crc kubenswrapper[4914]: I0127 14:19:26.267027 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdqpx\" (UniqueName: \"kubernetes.io/projected/6cb2db8e-4bbe-4530-970a-80a67f950e08-kube-api-access-cdqpx\") pod \"redhat-operators-p2jxt\" (UID: \"6cb2db8e-4bbe-4530-970a-80a67f950e08\") " pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:26 crc kubenswrapper[4914]: I0127 14:19:26.337946 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:26 crc kubenswrapper[4914]: I0127 14:19:26.803309 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p2jxt"] Jan 27 14:19:26 crc kubenswrapper[4914]: W0127 14:19:26.806311 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cb2db8e_4bbe_4530_970a_80a67f950e08.slice/crio-f3f641fa7f591646df51d73e682945583de610f6a54120b9915be436a033705f WatchSource:0}: Error finding container f3f641fa7f591646df51d73e682945583de610f6a54120b9915be436a033705f: Status 404 returned error can't find the container with id f3f641fa7f591646df51d73e682945583de610f6a54120b9915be436a033705f Jan 27 14:19:27 crc kubenswrapper[4914]: I0127 14:19:27.808437 4914 generic.go:334] "Generic (PLEG): container finished" podID="6cb2db8e-4bbe-4530-970a-80a67f950e08" containerID="7d1352c1b9947c49e5a92e3d0ca3b2de4e3564faf31fdd00ed3b44d883ad83fd" exitCode=0 Jan 27 14:19:27 crc kubenswrapper[4914]: I0127 14:19:27.808480 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2jxt" event={"ID":"6cb2db8e-4bbe-4530-970a-80a67f950e08","Type":"ContainerDied","Data":"7d1352c1b9947c49e5a92e3d0ca3b2de4e3564faf31fdd00ed3b44d883ad83fd"} Jan 27 14:19:27 crc kubenswrapper[4914]: I0127 14:19:27.808507 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2jxt" event={"ID":"6cb2db8e-4bbe-4530-970a-80a67f950e08","Type":"ContainerStarted","Data":"f3f641fa7f591646df51d73e682945583de610f6a54120b9915be436a033705f"} Jan 27 14:19:28 crc kubenswrapper[4914]: I0127 14:19:28.818776 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2jxt" 
event={"ID":"6cb2db8e-4bbe-4530-970a-80a67f950e08","Type":"ContainerStarted","Data":"f1769a1c1dda55ad5966912e7a9f1ccfcc4bfa454ad392af059b99bbb67a6dac"} Jan 27 14:19:29 crc kubenswrapper[4914]: I0127 14:19:29.833921 4914 generic.go:334] "Generic (PLEG): container finished" podID="6cb2db8e-4bbe-4530-970a-80a67f950e08" containerID="f1769a1c1dda55ad5966912e7a9f1ccfcc4bfa454ad392af059b99bbb67a6dac" exitCode=0 Jan 27 14:19:29 crc kubenswrapper[4914]: I0127 14:19:29.834001 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2jxt" event={"ID":"6cb2db8e-4bbe-4530-970a-80a67f950e08","Type":"ContainerDied","Data":"f1769a1c1dda55ad5966912e7a9f1ccfcc4bfa454ad392af059b99bbb67a6dac"} Jan 27 14:19:30 crc kubenswrapper[4914]: I0127 14:19:30.846234 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2jxt" event={"ID":"6cb2db8e-4bbe-4530-970a-80a67f950e08","Type":"ContainerStarted","Data":"6d740e1c59ee7c1ead64e6cae33ca3cb8d7a97c08a2a1e8daf2e362af065bbd3"} Jan 27 14:19:30 crc kubenswrapper[4914]: I0127 14:19:30.872417 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p2jxt" podStartSLOduration=3.451265261 podStartE2EDuration="5.872398873s" podCreationTimestamp="2026-01-27 14:19:25 +0000 UTC" firstStartedPulling="2026-01-27 14:19:27.811244632 +0000 UTC m=+2126.123594717" lastFinishedPulling="2026-01-27 14:19:30.232378244 +0000 UTC m=+2128.544728329" observedRunningTime="2026-01-27 14:19:30.864113366 +0000 UTC m=+2129.176463451" watchObservedRunningTime="2026-01-27 14:19:30.872398873 +0000 UTC m=+2129.184748958" Jan 27 14:19:36 crc kubenswrapper[4914]: I0127 14:19:36.338767 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:36 crc kubenswrapper[4914]: I0127 14:19:36.339273 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:36 crc kubenswrapper[4914]: I0127 14:19:36.419558 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:36 crc kubenswrapper[4914]: I0127 14:19:36.938765 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:36 crc kubenswrapper[4914]: I0127 14:19:36.984419 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p2jxt"] Jan 27 14:19:38 crc kubenswrapper[4914]: I0127 14:19:38.907527 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p2jxt" podUID="6cb2db8e-4bbe-4530-970a-80a67f950e08" containerName="registry-server" containerID="cri-o://6d740e1c59ee7c1ead64e6cae33ca3cb8d7a97c08a2a1e8daf2e362af065bbd3" gracePeriod=2 Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.318750 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.389912 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cb2db8e-4bbe-4530-970a-80a67f950e08-utilities\") pod \"6cb2db8e-4bbe-4530-970a-80a67f950e08\" (UID: \"6cb2db8e-4bbe-4530-970a-80a67f950e08\") " Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.389962 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cb2db8e-4bbe-4530-970a-80a67f950e08-catalog-content\") pod \"6cb2db8e-4bbe-4530-970a-80a67f950e08\" (UID: \"6cb2db8e-4bbe-4530-970a-80a67f950e08\") " Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.390134 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdqpx\" (UniqueName: \"kubernetes.io/projected/6cb2db8e-4bbe-4530-970a-80a67f950e08-kube-api-access-cdqpx\") pod \"6cb2db8e-4bbe-4530-970a-80a67f950e08\" (UID: \"6cb2db8e-4bbe-4530-970a-80a67f950e08\") " Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.391198 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cb2db8e-4bbe-4530-970a-80a67f950e08-utilities" (OuterVolumeSpecName: "utilities") pod "6cb2db8e-4bbe-4530-970a-80a67f950e08" (UID: "6cb2db8e-4bbe-4530-970a-80a67f950e08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.397575 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cb2db8e-4bbe-4530-970a-80a67f950e08-kube-api-access-cdqpx" (OuterVolumeSpecName: "kube-api-access-cdqpx") pod "6cb2db8e-4bbe-4530-970a-80a67f950e08" (UID: "6cb2db8e-4bbe-4530-970a-80a67f950e08"). InnerVolumeSpecName "kube-api-access-cdqpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.493696 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cb2db8e-4bbe-4530-970a-80a67f950e08-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.493748 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdqpx\" (UniqueName: \"kubernetes.io/projected/6cb2db8e-4bbe-4530-970a-80a67f950e08-kube-api-access-cdqpx\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.520196 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cb2db8e-4bbe-4530-970a-80a67f950e08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cb2db8e-4bbe-4530-970a-80a67f950e08" (UID: "6cb2db8e-4bbe-4530-970a-80a67f950e08"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.598664 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cb2db8e-4bbe-4530-970a-80a67f950e08-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.918047 4914 generic.go:334] "Generic (PLEG): container finished" podID="6cb2db8e-4bbe-4530-970a-80a67f950e08" containerID="6d740e1c59ee7c1ead64e6cae33ca3cb8d7a97c08a2a1e8daf2e362af065bbd3" exitCode=0 Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.918142 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p2jxt" Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.918160 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2jxt" event={"ID":"6cb2db8e-4bbe-4530-970a-80a67f950e08","Type":"ContainerDied","Data":"6d740e1c59ee7c1ead64e6cae33ca3cb8d7a97c08a2a1e8daf2e362af065bbd3"} Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.918434 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2jxt" event={"ID":"6cb2db8e-4bbe-4530-970a-80a67f950e08","Type":"ContainerDied","Data":"f3f641fa7f591646df51d73e682945583de610f6a54120b9915be436a033705f"} Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.918466 4914 scope.go:117] "RemoveContainer" containerID="6d740e1c59ee7c1ead64e6cae33ca3cb8d7a97c08a2a1e8daf2e362af065bbd3" Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.943549 4914 scope.go:117] "RemoveContainer" containerID="f1769a1c1dda55ad5966912e7a9f1ccfcc4bfa454ad392af059b99bbb67a6dac" Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.970464 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p2jxt"] Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.975818 4914 scope.go:117] "RemoveContainer" containerID="7d1352c1b9947c49e5a92e3d0ca3b2de4e3564faf31fdd00ed3b44d883ad83fd" Jan 27 14:19:39 crc kubenswrapper[4914]: I0127 14:19:39.977448 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p2jxt"] Jan 27 14:19:40 crc kubenswrapper[4914]: I0127 14:19:40.020604 4914 scope.go:117] "RemoveContainer" containerID="6d740e1c59ee7c1ead64e6cae33ca3cb8d7a97c08a2a1e8daf2e362af065bbd3" Jan 27 14:19:40 crc kubenswrapper[4914]: E0127 14:19:40.021253 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6d740e1c59ee7c1ead64e6cae33ca3cb8d7a97c08a2a1e8daf2e362af065bbd3\": container with ID starting with 6d740e1c59ee7c1ead64e6cae33ca3cb8d7a97c08a2a1e8daf2e362af065bbd3 not found: ID does not exist" containerID="6d740e1c59ee7c1ead64e6cae33ca3cb8d7a97c08a2a1e8daf2e362af065bbd3" Jan 27 14:19:40 crc kubenswrapper[4914]: I0127 14:19:40.021292 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d740e1c59ee7c1ead64e6cae33ca3cb8d7a97c08a2a1e8daf2e362af065bbd3"} err="failed to get container status \"6d740e1c59ee7c1ead64e6cae33ca3cb8d7a97c08a2a1e8daf2e362af065bbd3\": rpc error: code = NotFound desc = could not find container \"6d740e1c59ee7c1ead64e6cae33ca3cb8d7a97c08a2a1e8daf2e362af065bbd3\": container with ID starting with 6d740e1c59ee7c1ead64e6cae33ca3cb8d7a97c08a2a1e8daf2e362af065bbd3 not found: ID does not exist" Jan 27 14:19:40 crc kubenswrapper[4914]: I0127 14:19:40.021319 4914 scope.go:117] "RemoveContainer" containerID="f1769a1c1dda55ad5966912e7a9f1ccfcc4bfa454ad392af059b99bbb67a6dac" Jan 27 14:19:40 crc kubenswrapper[4914]: E0127 14:19:40.021749 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1769a1c1dda55ad5966912e7a9f1ccfcc4bfa454ad392af059b99bbb67a6dac\": container with ID starting with f1769a1c1dda55ad5966912e7a9f1ccfcc4bfa454ad392af059b99bbb67a6dac not found: ID does not exist" containerID="f1769a1c1dda55ad5966912e7a9f1ccfcc4bfa454ad392af059b99bbb67a6dac" Jan 27 14:19:40 crc kubenswrapper[4914]: I0127 14:19:40.021795 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1769a1c1dda55ad5966912e7a9f1ccfcc4bfa454ad392af059b99bbb67a6dac"} err="failed to get container status \"f1769a1c1dda55ad5966912e7a9f1ccfcc4bfa454ad392af059b99bbb67a6dac\": rpc error: code = NotFound desc = could not find container \"f1769a1c1dda55ad5966912e7a9f1ccfcc4bfa454ad392af059b99bbb67a6dac\": container with ID 
starting with f1769a1c1dda55ad5966912e7a9f1ccfcc4bfa454ad392af059b99bbb67a6dac not found: ID does not exist" Jan 27 14:19:40 crc kubenswrapper[4914]: I0127 14:19:40.022075 4914 scope.go:117] "RemoveContainer" containerID="7d1352c1b9947c49e5a92e3d0ca3b2de4e3564faf31fdd00ed3b44d883ad83fd" Jan 27 14:19:40 crc kubenswrapper[4914]: E0127 14:19:40.022427 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d1352c1b9947c49e5a92e3d0ca3b2de4e3564faf31fdd00ed3b44d883ad83fd\": container with ID starting with 7d1352c1b9947c49e5a92e3d0ca3b2de4e3564faf31fdd00ed3b44d883ad83fd not found: ID does not exist" containerID="7d1352c1b9947c49e5a92e3d0ca3b2de4e3564faf31fdd00ed3b44d883ad83fd" Jan 27 14:19:40 crc kubenswrapper[4914]: I0127 14:19:40.022465 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d1352c1b9947c49e5a92e3d0ca3b2de4e3564faf31fdd00ed3b44d883ad83fd"} err="failed to get container status \"7d1352c1b9947c49e5a92e3d0ca3b2de4e3564faf31fdd00ed3b44d883ad83fd\": rpc error: code = NotFound desc = could not find container \"7d1352c1b9947c49e5a92e3d0ca3b2de4e3564faf31fdd00ed3b44d883ad83fd\": container with ID starting with 7d1352c1b9947c49e5a92e3d0ca3b2de4e3564faf31fdd00ed3b44d883ad83fd not found: ID does not exist" Jan 27 14:19:40 crc kubenswrapper[4914]: I0127 14:19:40.304433 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cb2db8e-4bbe-4530-970a-80a67f950e08" path="/var/lib/kubelet/pods/6cb2db8e-4bbe-4530-970a-80a67f950e08/volumes" Jan 27 14:19:45 crc kubenswrapper[4914]: I0127 14:19:45.973767 4914 generic.go:334] "Generic (PLEG): container finished" podID="00e18877-2928-4039-b2e4-562989a3cdb5" containerID="abcce87510daef56d606d73912cdb8c1c45ed0c1d13eef2015b3777c2548b5b8" exitCode=0 Jan 27 14:19:45 crc kubenswrapper[4914]: I0127 14:19:45.973870 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" event={"ID":"00e18877-2928-4039-b2e4-562989a3cdb5","Type":"ContainerDied","Data":"abcce87510daef56d606d73912cdb8c1c45ed0c1d13eef2015b3777c2548b5b8"} Jan 27 14:19:47 crc kubenswrapper[4914]: I0127 14:19:47.395887 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" Jan 27 14:19:47 crc kubenswrapper[4914]: I0127 14:19:47.551312 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdvxp\" (UniqueName: \"kubernetes.io/projected/00e18877-2928-4039-b2e4-562989a3cdb5-kube-api-access-zdvxp\") pod \"00e18877-2928-4039-b2e4-562989a3cdb5\" (UID: \"00e18877-2928-4039-b2e4-562989a3cdb5\") " Jan 27 14:19:47 crc kubenswrapper[4914]: I0127 14:19:47.551381 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00e18877-2928-4039-b2e4-562989a3cdb5-ssh-key-openstack-edpm-ipam\") pod \"00e18877-2928-4039-b2e4-562989a3cdb5\" (UID: \"00e18877-2928-4039-b2e4-562989a3cdb5\") " Jan 27 14:19:47 crc kubenswrapper[4914]: I0127 14:19:47.551559 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00e18877-2928-4039-b2e4-562989a3cdb5-inventory\") pod \"00e18877-2928-4039-b2e4-562989a3cdb5\" (UID: \"00e18877-2928-4039-b2e4-562989a3cdb5\") " Jan 27 14:19:47 crc kubenswrapper[4914]: I0127 14:19:47.557464 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00e18877-2928-4039-b2e4-562989a3cdb5-kube-api-access-zdvxp" (OuterVolumeSpecName: "kube-api-access-zdvxp") pod "00e18877-2928-4039-b2e4-562989a3cdb5" (UID: "00e18877-2928-4039-b2e4-562989a3cdb5"). InnerVolumeSpecName "kube-api-access-zdvxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:19:47 crc kubenswrapper[4914]: I0127 14:19:47.581620 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00e18877-2928-4039-b2e4-562989a3cdb5-inventory" (OuterVolumeSpecName: "inventory") pod "00e18877-2928-4039-b2e4-562989a3cdb5" (UID: "00e18877-2928-4039-b2e4-562989a3cdb5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:19:47 crc kubenswrapper[4914]: I0127 14:19:47.592688 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00e18877-2928-4039-b2e4-562989a3cdb5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "00e18877-2928-4039-b2e4-562989a3cdb5" (UID: "00e18877-2928-4039-b2e4-562989a3cdb5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:19:47 crc kubenswrapper[4914]: I0127 14:19:47.654394 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00e18877-2928-4039-b2e4-562989a3cdb5-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:47 crc kubenswrapper[4914]: I0127 14:19:47.654450 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdvxp\" (UniqueName: \"kubernetes.io/projected/00e18877-2928-4039-b2e4-562989a3cdb5-kube-api-access-zdvxp\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:47 crc kubenswrapper[4914]: I0127 14:19:47.654471 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00e18877-2928-4039-b2e4-562989a3cdb5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:47 crc kubenswrapper[4914]: I0127 14:19:47.994952 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59" 
event={"ID":"00e18877-2928-4039-b2e4-562989a3cdb5","Type":"ContainerDied","Data":"7543da7010439e6d7c5405e15ad4092e6046806aea338d0ca4374568de2b737d"}
Jan 27 14:19:47 crc kubenswrapper[4914]: I0127 14:19:47.995270 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7543da7010439e6d7c5405e15ad4092e6046806aea338d0ca4374568de2b737d"
Jan 27 14:19:47 crc kubenswrapper[4914]: I0127 14:19:47.995065 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlf59"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.088775 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"]
Jan 27 14:19:48 crc kubenswrapper[4914]: E0127 14:19:48.089267 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00e18877-2928-4039-b2e4-562989a3cdb5" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.089312 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="00e18877-2928-4039-b2e4-562989a3cdb5" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:19:48 crc kubenswrapper[4914]: E0127 14:19:48.089332 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb2db8e-4bbe-4530-970a-80a67f950e08" containerName="extract-utilities"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.089343 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb2db8e-4bbe-4530-970a-80a67f950e08" containerName="extract-utilities"
Jan 27 14:19:48 crc kubenswrapper[4914]: E0127 14:19:48.089384 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb2db8e-4bbe-4530-970a-80a67f950e08" containerName="extract-content"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.089393 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb2db8e-4bbe-4530-970a-80a67f950e08" 
containerName="extract-content"
Jan 27 14:19:48 crc kubenswrapper[4914]: E0127 14:19:48.089411 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb2db8e-4bbe-4530-970a-80a67f950e08" containerName="registry-server"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.089420 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb2db8e-4bbe-4530-970a-80a67f950e08" containerName="registry-server"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.089635 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="00e18877-2928-4039-b2e4-562989a3cdb5" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.089656 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cb2db8e-4bbe-4530-970a-80a67f950e08" containerName="registry-server"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.090504 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.093202 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.095135 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.095167 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.096421 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.101049 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"]
Jan 27 
14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.165019 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ff553dd-8799-4ed1-9f38-25e6f481907d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx\" (UID: \"4ff553dd-8799-4ed1-9f38-25e6f481907d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.165167 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hbsb\" (UniqueName: \"kubernetes.io/projected/4ff553dd-8799-4ed1-9f38-25e6f481907d-kube-api-access-6hbsb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx\" (UID: \"4ff553dd-8799-4ed1-9f38-25e6f481907d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.165500 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ff553dd-8799-4ed1-9f38-25e6f481907d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx\" (UID: \"4ff553dd-8799-4ed1-9f38-25e6f481907d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.267373 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ff553dd-8799-4ed1-9f38-25e6f481907d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx\" (UID: \"4ff553dd-8799-4ed1-9f38-25e6f481907d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.267491 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ff553dd-8799-4ed1-9f38-25e6f481907d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx\" (UID: \"4ff553dd-8799-4ed1-9f38-25e6f481907d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.267524 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hbsb\" (UniqueName: \"kubernetes.io/projected/4ff553dd-8799-4ed1-9f38-25e6f481907d-kube-api-access-6hbsb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx\" (UID: \"4ff553dd-8799-4ed1-9f38-25e6f481907d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.271108 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ff553dd-8799-4ed1-9f38-25e6f481907d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx\" (UID: \"4ff553dd-8799-4ed1-9f38-25e6f481907d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.272198 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ff553dd-8799-4ed1-9f38-25e6f481907d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx\" (UID: \"4ff553dd-8799-4ed1-9f38-25e6f481907d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.285424 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hbsb\" (UniqueName: \"kubernetes.io/projected/4ff553dd-8799-4ed1-9f38-25e6f481907d-kube-api-access-6hbsb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx\" (UID: \"4ff553dd-8799-4ed1-9f38-25e6f481907d\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.418602 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"
Jan 27 14:19:48 crc kubenswrapper[4914]: I0127 14:19:48.931052 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"]
Jan 27 14:19:49 crc kubenswrapper[4914]: I0127 14:19:49.005463 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx" event={"ID":"4ff553dd-8799-4ed1-9f38-25e6f481907d","Type":"ContainerStarted","Data":"5eb66da89f91f3baefa5fdcfd086acc1eca7d69f6592a05486c168ab8b2e356a"}
Jan 27 14:19:50 crc kubenswrapper[4914]: I0127 14:19:50.013099 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx" event={"ID":"4ff553dd-8799-4ed1-9f38-25e6f481907d","Type":"ContainerStarted","Data":"9a60f7af1846d910398729bf6bb8f9d46c0cf1f50ad5ac8689361c1b663bc854"}
Jan 27 14:19:50 crc kubenswrapper[4914]: I0127 14:19:50.038617 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx" podStartSLOduration=1.578432353 podStartE2EDuration="2.038592464s" podCreationTimestamp="2026-01-27 14:19:48 +0000 UTC" firstStartedPulling="2026-01-27 14:19:48.939094621 +0000 UTC m=+2147.251444706" lastFinishedPulling="2026-01-27 14:19:49.399254732 +0000 UTC m=+2147.711604817" observedRunningTime="2026-01-27 14:19:50.033196426 +0000 UTC m=+2148.345546521" watchObservedRunningTime="2026-01-27 14:19:50.038592464 +0000 UTC m=+2148.350942559"
Jan 27 14:20:37 crc kubenswrapper[4914]: I0127 14:20:37.438591 4914 generic.go:334] "Generic (PLEG): container finished" podID="4ff553dd-8799-4ed1-9f38-25e6f481907d" 
containerID="9a60f7af1846d910398729bf6bb8f9d46c0cf1f50ad5ac8689361c1b663bc854" exitCode=0
Jan 27 14:20:37 crc kubenswrapper[4914]: I0127 14:20:37.438695 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx" event={"ID":"4ff553dd-8799-4ed1-9f38-25e6f481907d","Type":"ContainerDied","Data":"9a60f7af1846d910398729bf6bb8f9d46c0cf1f50ad5ac8689361c1b663bc854"}
Jan 27 14:20:38 crc kubenswrapper[4914]: I0127 14:20:38.898172 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.038131 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ff553dd-8799-4ed1-9f38-25e6f481907d-ssh-key-openstack-edpm-ipam\") pod \"4ff553dd-8799-4ed1-9f38-25e6f481907d\" (UID: \"4ff553dd-8799-4ed1-9f38-25e6f481907d\") "
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.038259 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hbsb\" (UniqueName: \"kubernetes.io/projected/4ff553dd-8799-4ed1-9f38-25e6f481907d-kube-api-access-6hbsb\") pod \"4ff553dd-8799-4ed1-9f38-25e6f481907d\" (UID: \"4ff553dd-8799-4ed1-9f38-25e6f481907d\") "
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.038293 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ff553dd-8799-4ed1-9f38-25e6f481907d-inventory\") pod \"4ff553dd-8799-4ed1-9f38-25e6f481907d\" (UID: \"4ff553dd-8799-4ed1-9f38-25e6f481907d\") "
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.044660 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff553dd-8799-4ed1-9f38-25e6f481907d-kube-api-access-6hbsb" (OuterVolumeSpecName: 
"kube-api-access-6hbsb") pod "4ff553dd-8799-4ed1-9f38-25e6f481907d" (UID: "4ff553dd-8799-4ed1-9f38-25e6f481907d"). InnerVolumeSpecName "kube-api-access-6hbsb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.069263 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff553dd-8799-4ed1-9f38-25e6f481907d-inventory" (OuterVolumeSpecName: "inventory") pod "4ff553dd-8799-4ed1-9f38-25e6f481907d" (UID: "4ff553dd-8799-4ed1-9f38-25e6f481907d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.093210 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff553dd-8799-4ed1-9f38-25e6f481907d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4ff553dd-8799-4ed1-9f38-25e6f481907d" (UID: "4ff553dd-8799-4ed1-9f38-25e6f481907d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.141424 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ff553dd-8799-4ed1-9f38-25e6f481907d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.141475 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hbsb\" (UniqueName: \"kubernetes.io/projected/4ff553dd-8799-4ed1-9f38-25e6f481907d-kube-api-access-6hbsb\") on node \"crc\" DevicePath \"\""
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.141489 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ff553dd-8799-4ed1-9f38-25e6f481907d-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.503937 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx" event={"ID":"4ff553dd-8799-4ed1-9f38-25e6f481907d","Type":"ContainerDied","Data":"5eb66da89f91f3baefa5fdcfd086acc1eca7d69f6592a05486c168ab8b2e356a"}
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.504007 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb66da89f91f3baefa5fdcfd086acc1eca7d69f6592a05486c168ab8b2e356a"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.504138 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.574201 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-r2q65"]
Jan 27 14:20:39 crc kubenswrapper[4914]: E0127 14:20:39.574732 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff553dd-8799-4ed1-9f38-25e6f481907d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.574753 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff553dd-8799-4ed1-9f38-25e6f481907d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.574983 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff553dd-8799-4ed1-9f38-25e6f481907d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.575764 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r2q65"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.580496 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.580637 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.582717 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.584078 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.588044 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-r2q65"]
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.651386 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c529c1e6-5832-42ef-aef0-a67bb6828236-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-r2q65\" (UID: \"c529c1e6-5832-42ef-aef0-a67bb6828236\") " pod="openstack/ssh-known-hosts-edpm-deployment-r2q65"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.651431 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6s5d\" (UniqueName: \"kubernetes.io/projected/c529c1e6-5832-42ef-aef0-a67bb6828236-kube-api-access-w6s5d\") pod \"ssh-known-hosts-edpm-deployment-r2q65\" (UID: \"c529c1e6-5832-42ef-aef0-a67bb6828236\") " pod="openstack/ssh-known-hosts-edpm-deployment-r2q65"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.651559 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c529c1e6-5832-42ef-aef0-a67bb6828236-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-r2q65\" (UID: \"c529c1e6-5832-42ef-aef0-a67bb6828236\") " pod="openstack/ssh-known-hosts-edpm-deployment-r2q65"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.753582 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c529c1e6-5832-42ef-aef0-a67bb6828236-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-r2q65\" (UID: \"c529c1e6-5832-42ef-aef0-a67bb6828236\") " pod="openstack/ssh-known-hosts-edpm-deployment-r2q65"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.753620 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6s5d\" (UniqueName: \"kubernetes.io/projected/c529c1e6-5832-42ef-aef0-a67bb6828236-kube-api-access-w6s5d\") pod \"ssh-known-hosts-edpm-deployment-r2q65\" (UID: \"c529c1e6-5832-42ef-aef0-a67bb6828236\") " pod="openstack/ssh-known-hosts-edpm-deployment-r2q65"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.753688 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c529c1e6-5832-42ef-aef0-a67bb6828236-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-r2q65\" (UID: \"c529c1e6-5832-42ef-aef0-a67bb6828236\") " pod="openstack/ssh-known-hosts-edpm-deployment-r2q65"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.758038 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c529c1e6-5832-42ef-aef0-a67bb6828236-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-r2q65\" (UID: \"c529c1e6-5832-42ef-aef0-a67bb6828236\") " pod="openstack/ssh-known-hosts-edpm-deployment-r2q65"
Jan 27 14:20:39 crc kubenswrapper[4914]: 
I0127 14:20:39.758989 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c529c1e6-5832-42ef-aef0-a67bb6828236-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-r2q65\" (UID: \"c529c1e6-5832-42ef-aef0-a67bb6828236\") " pod="openstack/ssh-known-hosts-edpm-deployment-r2q65"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.776077 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6s5d\" (UniqueName: \"kubernetes.io/projected/c529c1e6-5832-42ef-aef0-a67bb6828236-kube-api-access-w6s5d\") pod \"ssh-known-hosts-edpm-deployment-r2q65\" (UID: \"c529c1e6-5832-42ef-aef0-a67bb6828236\") " pod="openstack/ssh-known-hosts-edpm-deployment-r2q65"
Jan 27 14:20:39 crc kubenswrapper[4914]: I0127 14:20:39.897458 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r2q65"
Jan 27 14:20:40 crc kubenswrapper[4914]: I0127 14:20:40.410318 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-r2q65"]
Jan 27 14:20:40 crc kubenswrapper[4914]: I0127 14:20:40.413819 4914 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 14:20:40 crc kubenswrapper[4914]: I0127 14:20:40.511957 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r2q65" event={"ID":"c529c1e6-5832-42ef-aef0-a67bb6828236","Type":"ContainerStarted","Data":"8094e7cf12fbf6e815c0f5de31e9d737f8762cef63edfbac65dad64e60170f97"}
Jan 27 14:20:43 crc kubenswrapper[4914]: I0127 14:20:43.544674 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r2q65" event={"ID":"c529c1e6-5832-42ef-aef0-a67bb6828236","Type":"ContainerStarted","Data":"ae5b7c072b1acaa4d9f0622f9937d9ffd076876a76cfd5c2a1e74eacc5b088c6"}
Jan 27 14:20:43 crc kubenswrapper[4914]: I0127 
14:20:43.562223 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-r2q65" podStartSLOduration=1.830380746 podStartE2EDuration="4.562194863s" podCreationTimestamp="2026-01-27 14:20:39 +0000 UTC" firstStartedPulling="2026-01-27 14:20:40.413319952 +0000 UTC m=+2198.725670037" lastFinishedPulling="2026-01-27 14:20:43.145134069 +0000 UTC m=+2201.457484154" observedRunningTime="2026-01-27 14:20:43.559363976 +0000 UTC m=+2201.871714101" watchObservedRunningTime="2026-01-27 14:20:43.562194863 +0000 UTC m=+2201.874544988"
Jan 27 14:20:50 crc kubenswrapper[4914]: I0127 14:20:50.597808 4914 generic.go:334] "Generic (PLEG): container finished" podID="c529c1e6-5832-42ef-aef0-a67bb6828236" containerID="ae5b7c072b1acaa4d9f0622f9937d9ffd076876a76cfd5c2a1e74eacc5b088c6" exitCode=0
Jan 27 14:20:50 crc kubenswrapper[4914]: I0127 14:20:50.598037 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r2q65" event={"ID":"c529c1e6-5832-42ef-aef0-a67bb6828236","Type":"ContainerDied","Data":"ae5b7c072b1acaa4d9f0622f9937d9ffd076876a76cfd5c2a1e74eacc5b088c6"}
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.027217 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r2q65"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.203430 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c529c1e6-5832-42ef-aef0-a67bb6828236-ssh-key-openstack-edpm-ipam\") pod \"c529c1e6-5832-42ef-aef0-a67bb6828236\" (UID: \"c529c1e6-5832-42ef-aef0-a67bb6828236\") "
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.203521 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c529c1e6-5832-42ef-aef0-a67bb6828236-inventory-0\") pod \"c529c1e6-5832-42ef-aef0-a67bb6828236\" (UID: \"c529c1e6-5832-42ef-aef0-a67bb6828236\") "
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.203612 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6s5d\" (UniqueName: \"kubernetes.io/projected/c529c1e6-5832-42ef-aef0-a67bb6828236-kube-api-access-w6s5d\") pod \"c529c1e6-5832-42ef-aef0-a67bb6828236\" (UID: \"c529c1e6-5832-42ef-aef0-a67bb6828236\") "
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.210049 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c529c1e6-5832-42ef-aef0-a67bb6828236-kube-api-access-w6s5d" (OuterVolumeSpecName: "kube-api-access-w6s5d") pod "c529c1e6-5832-42ef-aef0-a67bb6828236" (UID: "c529c1e6-5832-42ef-aef0-a67bb6828236"). InnerVolumeSpecName "kube-api-access-w6s5d". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.229419 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c529c1e6-5832-42ef-aef0-a67bb6828236-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c529c1e6-5832-42ef-aef0-a67bb6828236" (UID: "c529c1e6-5832-42ef-aef0-a67bb6828236"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.237126 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c529c1e6-5832-42ef-aef0-a67bb6828236-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c529c1e6-5832-42ef-aef0-a67bb6828236" (UID: "c529c1e6-5832-42ef-aef0-a67bb6828236"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.305517 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c529c1e6-5832-42ef-aef0-a67bb6828236-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.305567 4914 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c529c1e6-5832-42ef-aef0-a67bb6828236-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.305582 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6s5d\" (UniqueName: \"kubernetes.io/projected/c529c1e6-5832-42ef-aef0-a67bb6828236-kube-api-access-w6s5d\") on node \"crc\" DevicePath \"\""
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.617638 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r2q65" 
event={"ID":"c529c1e6-5832-42ef-aef0-a67bb6828236","Type":"ContainerDied","Data":"8094e7cf12fbf6e815c0f5de31e9d737f8762cef63edfbac65dad64e60170f97"}
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.617678 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8094e7cf12fbf6e815c0f5de31e9d737f8762cef63edfbac65dad64e60170f97"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.617708 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r2q65"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.725623 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm"]
Jan 27 14:20:52 crc kubenswrapper[4914]: E0127 14:20:52.726113 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c529c1e6-5832-42ef-aef0-a67bb6828236" containerName="ssh-known-hosts-edpm-deployment"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.726134 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="c529c1e6-5832-42ef-aef0-a67bb6828236" containerName="ssh-known-hosts-edpm-deployment"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.726387 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="c529c1e6-5832-42ef-aef0-a67bb6828236" containerName="ssh-known-hosts-edpm-deployment"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.727158 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.730021 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.732256 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.732256 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.732564 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.740887 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm"]
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.815964 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm6nm\" (UID: \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.816087 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx7dd\" (UniqueName: \"kubernetes.io/projected/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-kube-api-access-mx7dd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm6nm\" (UID: \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.816135 4914 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm6nm\" (UID: \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.918421 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm6nm\" (UID: \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.918565 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx7dd\" (UniqueName: \"kubernetes.io/projected/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-kube-api-access-mx7dd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm6nm\" (UID: \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.918602 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm6nm\" (UID: \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.929928 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm6nm\" (UID: 
\"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.930025 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm6nm\" (UID: \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm"
Jan 27 14:20:52 crc kubenswrapper[4914]: I0127 14:20:52.954086 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx7dd\" (UniqueName: \"kubernetes.io/projected/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-kube-api-access-mx7dd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm6nm\" (UID: \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm"
Jan 27 14:20:53 crc kubenswrapper[4914]: I0127 14:20:53.045821 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm" Jan 27 14:20:53 crc kubenswrapper[4914]: I0127 14:20:53.582535 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm"] Jan 27 14:20:53 crc kubenswrapper[4914]: I0127 14:20:53.626480 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm" event={"ID":"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4","Type":"ContainerStarted","Data":"0adff927569b932d6f10c7a71f45bb258099567f93fa20c070717c1983323feb"} Jan 27 14:20:55 crc kubenswrapper[4914]: I0127 14:20:55.642904 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm" event={"ID":"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4","Type":"ContainerStarted","Data":"a58e2671ca7deefb8b270e62625749ff40a605ab15f5d3dbc8fe5268d32616f9"} Jan 27 14:20:55 crc kubenswrapper[4914]: I0127 14:20:55.665364 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm" podStartSLOduration=2.955419279 podStartE2EDuration="3.665335917s" podCreationTimestamp="2026-01-27 14:20:52 +0000 UTC" firstStartedPulling="2026-01-27 14:20:53.589090695 +0000 UTC m=+2211.901440780" lastFinishedPulling="2026-01-27 14:20:54.299007333 +0000 UTC m=+2212.611357418" observedRunningTime="2026-01-27 14:20:55.655804907 +0000 UTC m=+2213.968155022" watchObservedRunningTime="2026-01-27 14:20:55.665335917 +0000 UTC m=+2213.977686012" Jan 27 14:21:02 crc kubenswrapper[4914]: I0127 14:21:02.716603 4914 generic.go:334] "Generic (PLEG): container finished" podID="16f74bcf-a553-4fb7-9bbf-be7a617ccbc4" containerID="a58e2671ca7deefb8b270e62625749ff40a605ab15f5d3dbc8fe5268d32616f9" exitCode=0 Jan 27 14:21:02 crc kubenswrapper[4914]: I0127 14:21:02.716848 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm" event={"ID":"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4","Type":"ContainerDied","Data":"a58e2671ca7deefb8b270e62625749ff40a605ab15f5d3dbc8fe5268d32616f9"} Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.127096 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.294253 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx7dd\" (UniqueName: \"kubernetes.io/projected/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-kube-api-access-mx7dd\") pod \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\" (UID: \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\") " Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.294385 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-inventory\") pod \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\" (UID: \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\") " Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.294534 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-ssh-key-openstack-edpm-ipam\") pod \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\" (UID: \"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4\") " Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.301123 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-kube-api-access-mx7dd" (OuterVolumeSpecName: "kube-api-access-mx7dd") pod "16f74bcf-a553-4fb7-9bbf-be7a617ccbc4" (UID: "16f74bcf-a553-4fb7-9bbf-be7a617ccbc4"). InnerVolumeSpecName "kube-api-access-mx7dd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.301370 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx7dd\" (UniqueName: \"kubernetes.io/projected/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-kube-api-access-mx7dd\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.321919 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-inventory" (OuterVolumeSpecName: "inventory") pod "16f74bcf-a553-4fb7-9bbf-be7a617ccbc4" (UID: "16f74bcf-a553-4fb7-9bbf-be7a617ccbc4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.335069 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "16f74bcf-a553-4fb7-9bbf-be7a617ccbc4" (UID: "16f74bcf-a553-4fb7-9bbf-be7a617ccbc4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.402792 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.402862 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16f74bcf-a553-4fb7-9bbf-be7a617ccbc4-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.734313 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm" event={"ID":"16f74bcf-a553-4fb7-9bbf-be7a617ccbc4","Type":"ContainerDied","Data":"0adff927569b932d6f10c7a71f45bb258099567f93fa20c070717c1983323feb"} Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.734353 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0adff927569b932d6f10c7a71f45bb258099567f93fa20c070717c1983323feb" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.734377 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm6nm" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.826770 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7"] Jan 27 14:21:04 crc kubenswrapper[4914]: E0127 14:21:04.828123 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f74bcf-a553-4fb7-9bbf-be7a617ccbc4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.828210 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f74bcf-a553-4fb7-9bbf-be7a617ccbc4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.828466 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f74bcf-a553-4fb7-9bbf-be7a617ccbc4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.829528 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.831654 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.831932 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.832059 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.835978 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.838629 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7"] Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.910822 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7\" (UID: \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.910886 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7\" (UID: \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" Jan 27 14:21:04 crc kubenswrapper[4914]: I0127 14:21:04.910976 4914 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk9pl\" (UniqueName: \"kubernetes.io/projected/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-kube-api-access-gk9pl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7\" (UID: \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" Jan 27 14:21:05 crc kubenswrapper[4914]: I0127 14:21:05.012819 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk9pl\" (UniqueName: \"kubernetes.io/projected/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-kube-api-access-gk9pl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7\" (UID: \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" Jan 27 14:21:05 crc kubenswrapper[4914]: I0127 14:21:05.013017 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7\" (UID: \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" Jan 27 14:21:05 crc kubenswrapper[4914]: I0127 14:21:05.013048 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7\" (UID: \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" Jan 27 14:21:05 crc kubenswrapper[4914]: I0127 14:21:05.024641 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7\" (UID: \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" Jan 27 14:21:05 crc kubenswrapper[4914]: I0127 14:21:05.033500 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7\" (UID: \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" Jan 27 14:21:05 crc kubenswrapper[4914]: I0127 14:21:05.034186 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk9pl\" (UniqueName: \"kubernetes.io/projected/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-kube-api-access-gk9pl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7\" (UID: \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" Jan 27 14:21:05 crc kubenswrapper[4914]: I0127 14:21:05.169345 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" Jan 27 14:21:05 crc kubenswrapper[4914]: I0127 14:21:05.683737 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7"] Jan 27 14:21:05 crc kubenswrapper[4914]: W0127 14:21:05.692903 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62c60dfe_c65f_4b26_a21a_a9ace0cc93ee.slice/crio-26b41797706fcc673679044def04e050d79708ed98af838be5c3dcd40a02da40 WatchSource:0}: Error finding container 26b41797706fcc673679044def04e050d79708ed98af838be5c3dcd40a02da40: Status 404 returned error can't find the container with id 26b41797706fcc673679044def04e050d79708ed98af838be5c3dcd40a02da40 Jan 27 14:21:05 crc kubenswrapper[4914]: I0127 14:21:05.744743 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" event={"ID":"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee","Type":"ContainerStarted","Data":"26b41797706fcc673679044def04e050d79708ed98af838be5c3dcd40a02da40"} Jan 27 14:21:06 crc kubenswrapper[4914]: I0127 14:21:06.755826 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" event={"ID":"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee","Type":"ContainerStarted","Data":"9cc4ecf1386f4f8d0e754aa513df5f6aaedcb62a8d5f0ad6033e5fef3e4b5ccf"} Jan 27 14:21:06 crc kubenswrapper[4914]: I0127 14:21:06.781509 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" podStartSLOduration=2.025942857 podStartE2EDuration="2.781484141s" podCreationTimestamp="2026-01-27 14:21:04 +0000 UTC" firstStartedPulling="2026-01-27 14:21:05.697283027 +0000 UTC m=+2224.009633112" lastFinishedPulling="2026-01-27 14:21:06.452824311 +0000 UTC m=+2224.765174396" 
observedRunningTime="2026-01-27 14:21:06.772407384 +0000 UTC m=+2225.084757499" watchObservedRunningTime="2026-01-27 14:21:06.781484141 +0000 UTC m=+2225.093834246" Jan 27 14:21:07 crc kubenswrapper[4914]: I0127 14:21:07.691183 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:21:07 crc kubenswrapper[4914]: I0127 14:21:07.691277 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:21:16 crc kubenswrapper[4914]: I0127 14:21:16.854555 4914 generic.go:334] "Generic (PLEG): container finished" podID="62c60dfe-c65f-4b26-a21a-a9ace0cc93ee" containerID="9cc4ecf1386f4f8d0e754aa513df5f6aaedcb62a8d5f0ad6033e5fef3e4b5ccf" exitCode=0 Jan 27 14:21:16 crc kubenswrapper[4914]: I0127 14:21:16.854687 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" event={"ID":"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee","Type":"ContainerDied","Data":"9cc4ecf1386f4f8d0e754aa513df5f6aaedcb62a8d5f0ad6033e5fef3e4b5ccf"} Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.302103 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.470749 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-ssh-key-openstack-edpm-ipam\") pod \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\" (UID: \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\") " Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.470898 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk9pl\" (UniqueName: \"kubernetes.io/projected/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-kube-api-access-gk9pl\") pod \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\" (UID: \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\") " Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.471001 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-inventory\") pod \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\" (UID: \"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee\") " Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.476115 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-kube-api-access-gk9pl" (OuterVolumeSpecName: "kube-api-access-gk9pl") pod "62c60dfe-c65f-4b26-a21a-a9ace0cc93ee" (UID: "62c60dfe-c65f-4b26-a21a-a9ace0cc93ee"). InnerVolumeSpecName "kube-api-access-gk9pl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.497392 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "62c60dfe-c65f-4b26-a21a-a9ace0cc93ee" (UID: "62c60dfe-c65f-4b26-a21a-a9ace0cc93ee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.501282 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-inventory" (OuterVolumeSpecName: "inventory") pod "62c60dfe-c65f-4b26-a21a-a9ace0cc93ee" (UID: "62c60dfe-c65f-4b26-a21a-a9ace0cc93ee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.573401 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.573439 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk9pl\" (UniqueName: \"kubernetes.io/projected/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-kube-api-access-gk9pl\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.573448 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62c60dfe-c65f-4b26-a21a-a9ace0cc93ee-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.872538 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" 
event={"ID":"62c60dfe-c65f-4b26-a21a-a9ace0cc93ee","Type":"ContainerDied","Data":"26b41797706fcc673679044def04e050d79708ed98af838be5c3dcd40a02da40"} Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.872590 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.872599 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26b41797706fcc673679044def04e050d79708ed98af838be5c3dcd40a02da40" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.959551 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d"] Jan 27 14:21:18 crc kubenswrapper[4914]: E0127 14:21:18.960040 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c60dfe-c65f-4b26-a21a-a9ace0cc93ee" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.960077 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c60dfe-c65f-4b26-a21a-a9ace0cc93ee" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.960288 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c60dfe-c65f-4b26-a21a-a9ace0cc93ee" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.960936 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.963489 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.964562 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.964657 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.964975 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.965070 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.965183 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.965547 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.965696 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:21:18 crc kubenswrapper[4914]: I0127 14:21:18.974677 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d"] Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.084067 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.084270 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.084364 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.084509 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.084558 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.084662 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.084908 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.085016 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.085293 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.085409 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.085473 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.085516 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.085544 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vngb\" (UniqueName: 
\"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-kube-api-access-7vngb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.085596 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.195396 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.195450 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.195496 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.195552 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.195577 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vngb\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-kube-api-access-7vngb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.195659 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.195685 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.195748 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.195794 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.195866 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.195898 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.195966 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.196033 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.196059 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.200279 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: 
\"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.200856 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.201135 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.202423 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.202868 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc 
kubenswrapper[4914]: I0127 14:21:19.204342 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.212469 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.213008 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.214730 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.215702 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.216649 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.217368 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.218622 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vngb\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-kube-api-access-7vngb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.219152 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.278466 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.806038 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d"] Jan 27 14:21:19 crc kubenswrapper[4914]: I0127 14:21:19.881663 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" event={"ID":"ecda27a9-3b98-4f3e-9f06-5f8e46af202f","Type":"ContainerStarted","Data":"db93bb0b3d0fb8dde1d638fe86a7f4d369e72c988b1fd43a4ffaa0e735d4edd1"} Jan 27 14:21:20 crc kubenswrapper[4914]: I0127 14:21:20.890373 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" event={"ID":"ecda27a9-3b98-4f3e-9f06-5f8e46af202f","Type":"ContainerStarted","Data":"6538c32702d1a63c43ad24d17eb3760b0599a6a5792ac65694658ef890e33723"} Jan 27 14:21:20 crc kubenswrapper[4914]: I0127 14:21:20.918778 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" podStartSLOduration=2.110359104 podStartE2EDuration="2.91875521s" podCreationTimestamp="2026-01-27 14:21:18 +0000 UTC" firstStartedPulling="2026-01-27 14:21:19.810694294 +0000 UTC m=+2238.123044379" lastFinishedPulling="2026-01-27 14:21:20.6190904 +0000 UTC m=+2238.931440485" observedRunningTime="2026-01-27 14:21:20.909198339 +0000 UTC m=+2239.221548434" watchObservedRunningTime="2026-01-27 14:21:20.91875521 +0000 UTC m=+2239.231105305" Jan 27 14:21:37 crc kubenswrapper[4914]: I0127 14:21:37.690717 4914 
patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:21:37 crc kubenswrapper[4914]: I0127 14:21:37.691288 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:21:57 crc kubenswrapper[4914]: I0127 14:21:57.536111 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-6669b7ffb9-n8php" podUID="1b9b723f-e648-4f12-86f7-d453e000a46e" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 27 14:21:57 crc kubenswrapper[4914]: I0127 14:21:57.689220 4914 generic.go:334] "Generic (PLEG): container finished" podID="ecda27a9-3b98-4f3e-9f06-5f8e46af202f" containerID="6538c32702d1a63c43ad24d17eb3760b0599a6a5792ac65694658ef890e33723" exitCode=0 Jan 27 14:21:57 crc kubenswrapper[4914]: I0127 14:21:57.689268 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" event={"ID":"ecda27a9-3b98-4f3e-9f06-5f8e46af202f","Type":"ContainerDied","Data":"6538c32702d1a63c43ad24d17eb3760b0599a6a5792ac65694658ef890e33723"} Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.093087 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.222748 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-nova-combined-ca-bundle\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.222824 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vngb\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-kube-api-access-7vngb\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.222881 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-ssh-key-openstack-edpm-ipam\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.222910 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.222944 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-ovn-combined-ca-bundle\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 
crc kubenswrapper[4914]: I0127 14:21:59.224159 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.224217 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-repo-setup-combined-ca-bundle\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.224247 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-inventory\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.224328 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-libvirt-combined-ca-bundle\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.224352 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.224465 4914 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.225010 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-neutron-metadata-combined-ca-bundle\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.225055 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-telemetry-combined-ca-bundle\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.225095 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-bootstrap-combined-ca-bundle\") pod \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\" (UID: \"ecda27a9-3b98-4f3e-9f06-5f8e46af202f\") " Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.230948 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.232723 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.232760 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.232850 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.234181 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.234993 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.235146 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.235239 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-kube-api-access-7vngb" (OuterVolumeSpecName: "kube-api-access-7vngb") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "kube-api-access-7vngb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.235554 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.237067 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.238327 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.245438 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.257925 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-inventory" (OuterVolumeSpecName: "inventory") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.267992 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ecda27a9-3b98-4f3e-9f06-5f8e46af202f" (UID: "ecda27a9-3b98-4f3e-9f06-5f8e46af202f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.328759 4914 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.329112 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vngb\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-kube-api-access-7vngb\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.329265 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.329383 4914 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.329516 4914 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 
crc kubenswrapper[4914]: I0127 14:21:59.329659 4914 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.329785 4914 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.329940 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.330069 4914 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.330275 4914 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.330416 4914 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.330606 4914 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.330814 4914 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.330988 4914 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecda27a9-3b98-4f3e-9f06-5f8e46af202f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.706371 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" event={"ID":"ecda27a9-3b98-4f3e-9f06-5f8e46af202f","Type":"ContainerDied","Data":"db93bb0b3d0fb8dde1d638fe86a7f4d369e72c988b1fd43a4ffaa0e735d4edd1"} Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.706414 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db93bb0b3d0fb8dde1d638fe86a7f4d369e72c988b1fd43a4ffaa0e735d4edd1" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.706465 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.804967 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf"] Jan 27 14:21:59 crc kubenswrapper[4914]: E0127 14:21:59.805618 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecda27a9-3b98-4f3e-9f06-5f8e46af202f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.805706 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecda27a9-3b98-4f3e-9f06-5f8e46af202f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.806016 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecda27a9-3b98-4f3e-9f06-5f8e46af202f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.806948 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.812173 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.812320 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.812396 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.812474 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.812512 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.822378 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf"] Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.945571 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85vfp\" (UniqueName: \"kubernetes.io/projected/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-kube-api-access-85vfp\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.945915 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: 
\"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.946079 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.946237 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:21:59 crc kubenswrapper[4914]: I0127 14:21:59.946587 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:22:00 crc kubenswrapper[4914]: I0127 14:22:00.048330 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85vfp\" (UniqueName: \"kubernetes.io/projected/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-kube-api-access-85vfp\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:22:00 crc kubenswrapper[4914]: I0127 14:22:00.048390 4914 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:22:00 crc kubenswrapper[4914]: I0127 14:22:00.048434 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:22:00 crc kubenswrapper[4914]: I0127 14:22:00.048484 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:22:00 crc kubenswrapper[4914]: I0127 14:22:00.048545 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:22:00 crc kubenswrapper[4914]: I0127 14:22:00.049972 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: 
\"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:22:00 crc kubenswrapper[4914]: I0127 14:22:00.053201 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:22:00 crc kubenswrapper[4914]: I0127 14:22:00.053452 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:22:00 crc kubenswrapper[4914]: I0127 14:22:00.059336 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:22:00 crc kubenswrapper[4914]: I0127 14:22:00.069490 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85vfp\" (UniqueName: \"kubernetes.io/projected/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-kube-api-access-85vfp\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9tpnf\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:22:00 crc kubenswrapper[4914]: I0127 14:22:00.135947 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" Jan 27 14:22:00 crc kubenswrapper[4914]: I0127 14:22:00.663475 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf"] Jan 27 14:22:00 crc kubenswrapper[4914]: I0127 14:22:00.713848 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" event={"ID":"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6","Type":"ContainerStarted","Data":"442a28595aec1626e0ff1f83cb91f2ce805666640fdf4ba6ab625ce0dfc501d3"} Jan 27 14:22:01 crc kubenswrapper[4914]: I0127 14:22:01.722805 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" event={"ID":"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6","Type":"ContainerStarted","Data":"899fbe52f09758f62d371d263b7a6addeea6ea11f20633bda920c877831dc9e8"} Jan 27 14:22:01 crc kubenswrapper[4914]: I0127 14:22:01.758574 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" podStartSLOduration=2.293080969 podStartE2EDuration="2.75855344s" podCreationTimestamp="2026-01-27 14:21:59 +0000 UTC" firstStartedPulling="2026-01-27 14:22:00.67022293 +0000 UTC m=+2278.982573015" lastFinishedPulling="2026-01-27 14:22:01.135695401 +0000 UTC m=+2279.448045486" observedRunningTime="2026-01-27 14:22:01.745999137 +0000 UTC m=+2280.058349222" watchObservedRunningTime="2026-01-27 14:22:01.75855344 +0000 UTC m=+2280.070903525" Jan 27 14:22:07 crc kubenswrapper[4914]: I0127 14:22:07.691463 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:22:07 crc kubenswrapper[4914]: I0127 14:22:07.692009 4914 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:22:07 crc kubenswrapper[4914]: I0127 14:22:07.692054 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 14:22:07 crc kubenswrapper[4914]: I0127 14:22:07.692534 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc"} pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:22:07 crc kubenswrapper[4914]: I0127 14:22:07.692602 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" containerID="cri-o://bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" gracePeriod=600 Jan 27 14:22:07 crc kubenswrapper[4914]: E0127 14:22:07.815258 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:22:08 crc kubenswrapper[4914]: I0127 14:22:08.788156 4914 generic.go:334] "Generic (PLEG): container finished" 
podID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" exitCode=0 Jan 27 14:22:08 crc kubenswrapper[4914]: I0127 14:22:08.788207 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerDied","Data":"bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc"} Jan 27 14:22:08 crc kubenswrapper[4914]: I0127 14:22:08.788244 4914 scope.go:117] "RemoveContainer" containerID="f659cd5c9a3ab8758da4b24efeed5972f9d7d7fb86a73f395650bf561d77e063" Jan 27 14:22:08 crc kubenswrapper[4914]: I0127 14:22:08.788941 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:22:08 crc kubenswrapper[4914]: E0127 14:22:08.789211 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:22:22 crc kubenswrapper[4914]: I0127 14:22:22.302051 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:22:22 crc kubenswrapper[4914]: E0127 14:22:22.302887 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 
14:22:34 crc kubenswrapper[4914]: I0127 14:22:34.294634 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:22:34 crc kubenswrapper[4914]: E0127 14:22:34.296598 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:22:40 crc kubenswrapper[4914]: I0127 14:22:40.328647 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sx25k"] Jan 27 14:22:40 crc kubenswrapper[4914]: I0127 14:22:40.332403 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx25k"] Jan 27 14:22:40 crc kubenswrapper[4914]: I0127 14:22:40.332516 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:40 crc kubenswrapper[4914]: I0127 14:22:40.483093 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89zdg\" (UniqueName: \"kubernetes.io/projected/981c625d-0b9b-4128-942d-11cbf8fe23c2-kube-api-access-89zdg\") pod \"community-operators-sx25k\" (UID: \"981c625d-0b9b-4128-942d-11cbf8fe23c2\") " pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:40 crc kubenswrapper[4914]: I0127 14:22:40.485324 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/981c625d-0b9b-4128-942d-11cbf8fe23c2-utilities\") pod \"community-operators-sx25k\" (UID: \"981c625d-0b9b-4128-942d-11cbf8fe23c2\") " pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:40 crc kubenswrapper[4914]: I0127 14:22:40.485643 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/981c625d-0b9b-4128-942d-11cbf8fe23c2-catalog-content\") pod \"community-operators-sx25k\" (UID: \"981c625d-0b9b-4128-942d-11cbf8fe23c2\") " pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:40 crc kubenswrapper[4914]: I0127 14:22:40.587201 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89zdg\" (UniqueName: \"kubernetes.io/projected/981c625d-0b9b-4128-942d-11cbf8fe23c2-kube-api-access-89zdg\") pod \"community-operators-sx25k\" (UID: \"981c625d-0b9b-4128-942d-11cbf8fe23c2\") " pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:40 crc kubenswrapper[4914]: I0127 14:22:40.587387 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/981c625d-0b9b-4128-942d-11cbf8fe23c2-utilities\") pod 
\"community-operators-sx25k\" (UID: \"981c625d-0b9b-4128-942d-11cbf8fe23c2\") " pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:40 crc kubenswrapper[4914]: I0127 14:22:40.587406 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/981c625d-0b9b-4128-942d-11cbf8fe23c2-catalog-content\") pod \"community-operators-sx25k\" (UID: \"981c625d-0b9b-4128-942d-11cbf8fe23c2\") " pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:40 crc kubenswrapper[4914]: I0127 14:22:40.587884 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/981c625d-0b9b-4128-942d-11cbf8fe23c2-utilities\") pod \"community-operators-sx25k\" (UID: \"981c625d-0b9b-4128-942d-11cbf8fe23c2\") " pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:40 crc kubenswrapper[4914]: I0127 14:22:40.588126 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/981c625d-0b9b-4128-942d-11cbf8fe23c2-catalog-content\") pod \"community-operators-sx25k\" (UID: \"981c625d-0b9b-4128-942d-11cbf8fe23c2\") " pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:40 crc kubenswrapper[4914]: I0127 14:22:40.607637 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89zdg\" (UniqueName: \"kubernetes.io/projected/981c625d-0b9b-4128-942d-11cbf8fe23c2-kube-api-access-89zdg\") pod \"community-operators-sx25k\" (UID: \"981c625d-0b9b-4128-942d-11cbf8fe23c2\") " pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:40 crc kubenswrapper[4914]: I0127 14:22:40.659267 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:41 crc kubenswrapper[4914]: I0127 14:22:41.244628 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx25k"] Jan 27 14:22:42 crc kubenswrapper[4914]: I0127 14:22:42.068309 4914 generic.go:334] "Generic (PLEG): container finished" podID="981c625d-0b9b-4128-942d-11cbf8fe23c2" containerID="b9100b6ec62ed44c54f3e6cbb567e628cf15b87435a9022cde680bb86a1dcf94" exitCode=0 Jan 27 14:22:42 crc kubenswrapper[4914]: I0127 14:22:42.068375 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx25k" event={"ID":"981c625d-0b9b-4128-942d-11cbf8fe23c2","Type":"ContainerDied","Data":"b9100b6ec62ed44c54f3e6cbb567e628cf15b87435a9022cde680bb86a1dcf94"} Jan 27 14:22:42 crc kubenswrapper[4914]: I0127 14:22:42.068688 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx25k" event={"ID":"981c625d-0b9b-4128-942d-11cbf8fe23c2","Type":"ContainerStarted","Data":"44ccdc76d1d301fe8f0596272e468b983a2f8f0d311f1ba9adf2eb87c3bc4b09"} Jan 27 14:22:43 crc kubenswrapper[4914]: I0127 14:22:43.078603 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx25k" event={"ID":"981c625d-0b9b-4128-942d-11cbf8fe23c2","Type":"ContainerStarted","Data":"a18ffef91390662f93f63ba22c604bb11f39af0557517912c9bf7dea80300306"} Jan 27 14:22:44 crc kubenswrapper[4914]: I0127 14:22:44.090233 4914 generic.go:334] "Generic (PLEG): container finished" podID="981c625d-0b9b-4128-942d-11cbf8fe23c2" containerID="a18ffef91390662f93f63ba22c604bb11f39af0557517912c9bf7dea80300306" exitCode=0 Jan 27 14:22:44 crc kubenswrapper[4914]: I0127 14:22:44.090383 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx25k" 
event={"ID":"981c625d-0b9b-4128-942d-11cbf8fe23c2","Type":"ContainerDied","Data":"a18ffef91390662f93f63ba22c604bb11f39af0557517912c9bf7dea80300306"} Jan 27 14:22:45 crc kubenswrapper[4914]: I0127 14:22:45.102442 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx25k" event={"ID":"981c625d-0b9b-4128-942d-11cbf8fe23c2","Type":"ContainerStarted","Data":"2192aee3217a4f7f18bb66103d249f05198429b9476a6d8233bbbfe0857a1eaf"} Jan 27 14:22:45 crc kubenswrapper[4914]: I0127 14:22:45.783536 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sx25k" podStartSLOduration=3.088362558 podStartE2EDuration="5.783513637s" podCreationTimestamp="2026-01-27 14:22:40 +0000 UTC" firstStartedPulling="2026-01-27 14:22:42.072172908 +0000 UTC m=+2320.384522983" lastFinishedPulling="2026-01-27 14:22:44.767323977 +0000 UTC m=+2323.079674062" observedRunningTime="2026-01-27 14:22:45.745234111 +0000 UTC m=+2324.057584236" watchObservedRunningTime="2026-01-27 14:22:45.783513637 +0000 UTC m=+2324.095863722" Jan 27 14:22:48 crc kubenswrapper[4914]: I0127 14:22:48.295414 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:22:48 crc kubenswrapper[4914]: E0127 14:22:48.295969 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:22:50 crc kubenswrapper[4914]: I0127 14:22:50.659956 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:50 crc 
kubenswrapper[4914]: I0127 14:22:50.660226 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:50 crc kubenswrapper[4914]: I0127 14:22:50.707108 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:51 crc kubenswrapper[4914]: I0127 14:22:51.191823 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:51 crc kubenswrapper[4914]: I0127 14:22:51.246812 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sx25k"] Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.164663 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sx25k" podUID="981c625d-0b9b-4128-942d-11cbf8fe23c2" containerName="registry-server" containerID="cri-o://2192aee3217a4f7f18bb66103d249f05198429b9476a6d8233bbbfe0857a1eaf" gracePeriod=2 Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.486046 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wz9xz"] Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.488362 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz9xz" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.497536 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz9xz"] Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.607321 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a71458c-6815-4c11-94b0-a16d029821e3-catalog-content\") pod \"redhat-marketplace-wz9xz\" (UID: \"0a71458c-6815-4c11-94b0-a16d029821e3\") " pod="openshift-marketplace/redhat-marketplace-wz9xz" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.607466 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlzr4\" (UniqueName: \"kubernetes.io/projected/0a71458c-6815-4c11-94b0-a16d029821e3-kube-api-access-zlzr4\") pod \"redhat-marketplace-wz9xz\" (UID: \"0a71458c-6815-4c11-94b0-a16d029821e3\") " pod="openshift-marketplace/redhat-marketplace-wz9xz" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.607524 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a71458c-6815-4c11-94b0-a16d029821e3-utilities\") pod \"redhat-marketplace-wz9xz\" (UID: \"0a71458c-6815-4c11-94b0-a16d029821e3\") " pod="openshift-marketplace/redhat-marketplace-wz9xz" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.641913 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sx25k" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.710093 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlzr4\" (UniqueName: \"kubernetes.io/projected/0a71458c-6815-4c11-94b0-a16d029821e3-kube-api-access-zlzr4\") pod \"redhat-marketplace-wz9xz\" (UID: \"0a71458c-6815-4c11-94b0-a16d029821e3\") " pod="openshift-marketplace/redhat-marketplace-wz9xz" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.710228 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a71458c-6815-4c11-94b0-a16d029821e3-utilities\") pod \"redhat-marketplace-wz9xz\" (UID: \"0a71458c-6815-4c11-94b0-a16d029821e3\") " pod="openshift-marketplace/redhat-marketplace-wz9xz" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.710346 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a71458c-6815-4c11-94b0-a16d029821e3-catalog-content\") pod \"redhat-marketplace-wz9xz\" (UID: \"0a71458c-6815-4c11-94b0-a16d029821e3\") " pod="openshift-marketplace/redhat-marketplace-wz9xz" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.710861 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a71458c-6815-4c11-94b0-a16d029821e3-catalog-content\") pod \"redhat-marketplace-wz9xz\" (UID: \"0a71458c-6815-4c11-94b0-a16d029821e3\") " pod="openshift-marketplace/redhat-marketplace-wz9xz" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.711106 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a71458c-6815-4c11-94b0-a16d029821e3-utilities\") pod \"redhat-marketplace-wz9xz\" (UID: \"0a71458c-6815-4c11-94b0-a16d029821e3\") " 
pod="openshift-marketplace/redhat-marketplace-wz9xz" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.730399 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlzr4\" (UniqueName: \"kubernetes.io/projected/0a71458c-6815-4c11-94b0-a16d029821e3-kube-api-access-zlzr4\") pod \"redhat-marketplace-wz9xz\" (UID: \"0a71458c-6815-4c11-94b0-a16d029821e3\") " pod="openshift-marketplace/redhat-marketplace-wz9xz" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.812032 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/981c625d-0b9b-4128-942d-11cbf8fe23c2-utilities\") pod \"981c625d-0b9b-4128-942d-11cbf8fe23c2\" (UID: \"981c625d-0b9b-4128-942d-11cbf8fe23c2\") " Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.812286 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89zdg\" (UniqueName: \"kubernetes.io/projected/981c625d-0b9b-4128-942d-11cbf8fe23c2-kube-api-access-89zdg\") pod \"981c625d-0b9b-4128-942d-11cbf8fe23c2\" (UID: \"981c625d-0b9b-4128-942d-11cbf8fe23c2\") " Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.812353 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/981c625d-0b9b-4128-942d-11cbf8fe23c2-catalog-content\") pod \"981c625d-0b9b-4128-942d-11cbf8fe23c2\" (UID: \"981c625d-0b9b-4128-942d-11cbf8fe23c2\") " Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.818432 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz9xz" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.818784 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/981c625d-0b9b-4128-942d-11cbf8fe23c2-utilities" (OuterVolumeSpecName: "utilities") pod "981c625d-0b9b-4128-942d-11cbf8fe23c2" (UID: "981c625d-0b9b-4128-942d-11cbf8fe23c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.824092 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981c625d-0b9b-4128-942d-11cbf8fe23c2-kube-api-access-89zdg" (OuterVolumeSpecName: "kube-api-access-89zdg") pod "981c625d-0b9b-4128-942d-11cbf8fe23c2" (UID: "981c625d-0b9b-4128-942d-11cbf8fe23c2"). InnerVolumeSpecName "kube-api-access-89zdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.914941 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/981c625d-0b9b-4128-942d-11cbf8fe23c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:22:53 crc kubenswrapper[4914]: I0127 14:22:53.915264 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89zdg\" (UniqueName: \"kubernetes.io/projected/981c625d-0b9b-4128-942d-11cbf8fe23c2-kube-api-access-89zdg\") on node \"crc\" DevicePath \"\"" Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.115691 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz9xz"] Jan 27 14:22:54 crc kubenswrapper[4914]: W0127 14:22:54.119966 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a71458c_6815_4c11_94b0_a16d029821e3.slice/crio-2f468c69679cdeff0ff11c5eea9488dcd9cd9ad4de3a0eddb70d77ad4faf8871 
WatchSource:0}: Error finding container 2f468c69679cdeff0ff11c5eea9488dcd9cd9ad4de3a0eddb70d77ad4faf8871: Status 404 returned error can't find the container with id 2f468c69679cdeff0ff11c5eea9488dcd9cd9ad4de3a0eddb70d77ad4faf8871
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.176681 4914 generic.go:334] "Generic (PLEG): container finished" podID="981c625d-0b9b-4128-942d-11cbf8fe23c2" containerID="2192aee3217a4f7f18bb66103d249f05198429b9476a6d8233bbbfe0857a1eaf" exitCode=0
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.176770 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx25k"
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.176780 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx25k" event={"ID":"981c625d-0b9b-4128-942d-11cbf8fe23c2","Type":"ContainerDied","Data":"2192aee3217a4f7f18bb66103d249f05198429b9476a6d8233bbbfe0857a1eaf"}
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.176811 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx25k" event={"ID":"981c625d-0b9b-4128-942d-11cbf8fe23c2","Type":"ContainerDied","Data":"44ccdc76d1d301fe8f0596272e468b983a2f8f0d311f1ba9adf2eb87c3bc4b09"}
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.176846 4914 scope.go:117] "RemoveContainer" containerID="2192aee3217a4f7f18bb66103d249f05198429b9476a6d8233bbbfe0857a1eaf"
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.178981 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz9xz" event={"ID":"0a71458c-6815-4c11-94b0-a16d029821e3","Type":"ContainerStarted","Data":"2f468c69679cdeff0ff11c5eea9488dcd9cd9ad4de3a0eddb70d77ad4faf8871"}
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.203945 4914 scope.go:117] "RemoveContainer" containerID="a18ffef91390662f93f63ba22c604bb11f39af0557517912c9bf7dea80300306"
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.228895 4914 scope.go:117] "RemoveContainer" containerID="b9100b6ec62ed44c54f3e6cbb567e628cf15b87435a9022cde680bb86a1dcf94"
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.256426 4914 scope.go:117] "RemoveContainer" containerID="2192aee3217a4f7f18bb66103d249f05198429b9476a6d8233bbbfe0857a1eaf"
Jan 27 14:22:54 crc kubenswrapper[4914]: E0127 14:22:54.256864 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2192aee3217a4f7f18bb66103d249f05198429b9476a6d8233bbbfe0857a1eaf\": container with ID starting with 2192aee3217a4f7f18bb66103d249f05198429b9476a6d8233bbbfe0857a1eaf not found: ID does not exist" containerID="2192aee3217a4f7f18bb66103d249f05198429b9476a6d8233bbbfe0857a1eaf"
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.256911 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2192aee3217a4f7f18bb66103d249f05198429b9476a6d8233bbbfe0857a1eaf"} err="failed to get container status \"2192aee3217a4f7f18bb66103d249f05198429b9476a6d8233bbbfe0857a1eaf\": rpc error: code = NotFound desc = could not find container \"2192aee3217a4f7f18bb66103d249f05198429b9476a6d8233bbbfe0857a1eaf\": container with ID starting with 2192aee3217a4f7f18bb66103d249f05198429b9476a6d8233bbbfe0857a1eaf not found: ID does not exist"
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.256938 4914 scope.go:117] "RemoveContainer" containerID="a18ffef91390662f93f63ba22c604bb11f39af0557517912c9bf7dea80300306"
Jan 27 14:22:54 crc kubenswrapper[4914]: E0127 14:22:54.257308 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a18ffef91390662f93f63ba22c604bb11f39af0557517912c9bf7dea80300306\": container with ID starting with a18ffef91390662f93f63ba22c604bb11f39af0557517912c9bf7dea80300306 not found: ID does not exist" containerID="a18ffef91390662f93f63ba22c604bb11f39af0557517912c9bf7dea80300306"
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.257361 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a18ffef91390662f93f63ba22c604bb11f39af0557517912c9bf7dea80300306"} err="failed to get container status \"a18ffef91390662f93f63ba22c604bb11f39af0557517912c9bf7dea80300306\": rpc error: code = NotFound desc = could not find container \"a18ffef91390662f93f63ba22c604bb11f39af0557517912c9bf7dea80300306\": container with ID starting with a18ffef91390662f93f63ba22c604bb11f39af0557517912c9bf7dea80300306 not found: ID does not exist"
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.257391 4914 scope.go:117] "RemoveContainer" containerID="b9100b6ec62ed44c54f3e6cbb567e628cf15b87435a9022cde680bb86a1dcf94"
Jan 27 14:22:54 crc kubenswrapper[4914]: E0127 14:22:54.257924 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9100b6ec62ed44c54f3e6cbb567e628cf15b87435a9022cde680bb86a1dcf94\": container with ID starting with b9100b6ec62ed44c54f3e6cbb567e628cf15b87435a9022cde680bb86a1dcf94 not found: ID does not exist" containerID="b9100b6ec62ed44c54f3e6cbb567e628cf15b87435a9022cde680bb86a1dcf94"
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.257958 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9100b6ec62ed44c54f3e6cbb567e628cf15b87435a9022cde680bb86a1dcf94"} err="failed to get container status \"b9100b6ec62ed44c54f3e6cbb567e628cf15b87435a9022cde680bb86a1dcf94\": rpc error: code = NotFound desc = could not find container \"b9100b6ec62ed44c54f3e6cbb567e628cf15b87435a9022cde680bb86a1dcf94\": container with ID starting with b9100b6ec62ed44c54f3e6cbb567e628cf15b87435a9022cde680bb86a1dcf94 not found: ID does not exist"
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.483871 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/981c625d-0b9b-4128-942d-11cbf8fe23c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "981c625d-0b9b-4128-942d-11cbf8fe23c2" (UID: "981c625d-0b9b-4128-942d-11cbf8fe23c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.527046 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/981c625d-0b9b-4128-942d-11cbf8fe23c2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.820307 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sx25k"]
Jan 27 14:22:54 crc kubenswrapper[4914]: I0127 14:22:54.829474 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sx25k"]
Jan 27 14:22:55 crc kubenswrapper[4914]: I0127 14:22:55.189979 4914 generic.go:334] "Generic (PLEG): container finished" podID="0a71458c-6815-4c11-94b0-a16d029821e3" containerID="ab1d9fe3c162bd71aad7a15a5a292b590ec356dd5195782dd6696c2bbb11cd39" exitCode=0
Jan 27 14:22:55 crc kubenswrapper[4914]: I0127 14:22:55.190037 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz9xz" event={"ID":"0a71458c-6815-4c11-94b0-a16d029821e3","Type":"ContainerDied","Data":"ab1d9fe3c162bd71aad7a15a5a292b590ec356dd5195782dd6696c2bbb11cd39"}
Jan 27 14:22:56 crc kubenswrapper[4914]: I0127 14:22:56.305272 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="981c625d-0b9b-4128-942d-11cbf8fe23c2" path="/var/lib/kubelet/pods/981c625d-0b9b-4128-942d-11cbf8fe23c2/volumes"
Jan 27 14:22:57 crc kubenswrapper[4914]: I0127 14:22:57.229595 4914 generic.go:334] "Generic (PLEG): container finished" podID="0a71458c-6815-4c11-94b0-a16d029821e3" containerID="a926df0d18121a52c9007d4e895b4e083bcee2a4246763438e4d857ee61069ab" exitCode=0
Jan 27 14:22:57 crc kubenswrapper[4914]: I0127 14:22:57.229655 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz9xz" event={"ID":"0a71458c-6815-4c11-94b0-a16d029821e3","Type":"ContainerDied","Data":"a926df0d18121a52c9007d4e895b4e083bcee2a4246763438e4d857ee61069ab"}
Jan 27 14:22:58 crc kubenswrapper[4914]: I0127 14:22:58.241131 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz9xz" event={"ID":"0a71458c-6815-4c11-94b0-a16d029821e3","Type":"ContainerStarted","Data":"f1c66b2afde081d85af0134ac4a800ad15a477911a3439fae1a931b2196b1a6e"}
Jan 27 14:22:58 crc kubenswrapper[4914]: I0127 14:22:58.262590 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wz9xz" podStartSLOduration=2.7426479280000002 podStartE2EDuration="5.262565982s" podCreationTimestamp="2026-01-27 14:22:53 +0000 UTC" firstStartedPulling="2026-01-27 14:22:55.191628342 +0000 UTC m=+2333.503978427" lastFinishedPulling="2026-01-27 14:22:57.711546396 +0000 UTC m=+2336.023896481" observedRunningTime="2026-01-27 14:22:58.259265732 +0000 UTC m=+2336.571615817" watchObservedRunningTime="2026-01-27 14:22:58.262565982 +0000 UTC m=+2336.574916067"
Jan 27 14:22:59 crc kubenswrapper[4914]: I0127 14:22:59.294430 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc"
Jan 27 14:22:59 crc kubenswrapper[4914]: E0127 14:22:59.294742 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a"
Jan 27 14:23:03 crc kubenswrapper[4914]: I0127 14:23:03.288897 4914 generic.go:334] "Generic (PLEG): container finished" podID="efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6" containerID="899fbe52f09758f62d371d263b7a6addeea6ea11f20633bda920c877831dc9e8" exitCode=0
Jan 27 14:23:03 crc kubenswrapper[4914]: I0127 14:23:03.288993 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" event={"ID":"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6","Type":"ContainerDied","Data":"899fbe52f09758f62d371d263b7a6addeea6ea11f20633bda920c877831dc9e8"}
Jan 27 14:23:03 crc kubenswrapper[4914]: I0127 14:23:03.820026 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wz9xz"
Jan 27 14:23:03 crc kubenswrapper[4914]: I0127 14:23:03.820392 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wz9xz"
Jan 27 14:23:03 crc kubenswrapper[4914]: I0127 14:23:03.865735 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wz9xz"
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.369110 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wz9xz"
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.423516 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz9xz"]
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.718668 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf"
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.837681 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85vfp\" (UniqueName: \"kubernetes.io/projected/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-kube-api-access-85vfp\") pod \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") "
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.837742 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ssh-key-openstack-edpm-ipam\") pod \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") "
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.837793 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ovn-combined-ca-bundle\") pod \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") "
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.837899 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-inventory\") pod \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") "
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.837987 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ovncontroller-config-0\") pod \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\" (UID: \"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6\") "
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.844772 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-kube-api-access-85vfp" (OuterVolumeSpecName: "kube-api-access-85vfp") pod "efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6" (UID: "efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6"). InnerVolumeSpecName "kube-api-access-85vfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.856405 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6" (UID: "efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.867861 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6" (UID: "efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.870135 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-inventory" (OuterVolumeSpecName: "inventory") pod "efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6" (UID: "efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.870703 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6" (UID: "efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.940551 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85vfp\" (UniqueName: \"kubernetes.io/projected/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-kube-api-access-85vfp\") on node \"crc\" DevicePath \"\""
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.940596 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.940607 4914 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.940617 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 14:23:04 crc kubenswrapper[4914]: I0127 14:23:04.940627 4914 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.306480 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf" event={"ID":"efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6","Type":"ContainerDied","Data":"442a28595aec1626e0ff1f83cb91f2ce805666640fdf4ba6ab625ce0dfc501d3"}
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.306794 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="442a28595aec1626e0ff1f83cb91f2ce805666640fdf4ba6ab625ce0dfc501d3"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.306526 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9tpnf"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.410153 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"]
Jan 27 14:23:05 crc kubenswrapper[4914]: E0127 14:23:05.410683 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.410701 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:23:05 crc kubenswrapper[4914]: E0127 14:23:05.410753 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981c625d-0b9b-4128-942d-11cbf8fe23c2" containerName="registry-server"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.410760 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="981c625d-0b9b-4128-942d-11cbf8fe23c2" containerName="registry-server"
Jan 27 14:23:05 crc kubenswrapper[4914]: E0127 14:23:05.410780 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981c625d-0b9b-4128-942d-11cbf8fe23c2" containerName="extract-utilities"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.410787 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="981c625d-0b9b-4128-942d-11cbf8fe23c2" containerName="extract-utilities"
Jan 27 14:23:05 crc kubenswrapper[4914]: E0127 14:23:05.410809 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981c625d-0b9b-4128-942d-11cbf8fe23c2" containerName="extract-content"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.410816 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="981c625d-0b9b-4128-942d-11cbf8fe23c2" containerName="extract-content"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.411050 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.411066 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="981c625d-0b9b-4128-942d-11cbf8fe23c2" containerName="registry-server"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.418778 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.420631 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"]
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.421559 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.421912 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.422395 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.422699 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.422970 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.422969 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.551907 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn4m7\" (UniqueName: \"kubernetes.io/projected/8b57afc2-5e5e-4268-ac55-f6237fb3f284-kube-api-access-xn4m7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.551982 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.552076 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.552132 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.552301 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.552340 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.654008 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.654131 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.654162 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.654240 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.654268 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.654324 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn4m7\" (UniqueName: \"kubernetes.io/projected/8b57afc2-5e5e-4268-ac55-f6237fb3f284-kube-api-access-xn4m7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.659389 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.659760 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.659980 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.661077 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.665451 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.674365 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn4m7\" (UniqueName: \"kubernetes.io/projected/8b57afc2-5e5e-4268-ac55-f6237fb3f284-kube-api-access-xn4m7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:05 crc kubenswrapper[4914]: I0127 14:23:05.745136 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
Jan 27 14:23:06 crc kubenswrapper[4914]: I0127 14:23:06.315157 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wz9xz" podUID="0a71458c-6815-4c11-94b0-a16d029821e3" containerName="registry-server" containerID="cri-o://f1c66b2afde081d85af0134ac4a800ad15a477911a3439fae1a931b2196b1a6e" gracePeriod=2
Jan 27 14:23:06 crc kubenswrapper[4914]: I0127 14:23:06.348122 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"]
Jan 27 14:23:06 crc kubenswrapper[4914]: I0127 14:23:06.799973 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz9xz"
Jan 27 14:23:06 crc kubenswrapper[4914]: I0127 14:23:06.881580 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a71458c-6815-4c11-94b0-a16d029821e3-utilities\") pod \"0a71458c-6815-4c11-94b0-a16d029821e3\" (UID: \"0a71458c-6815-4c11-94b0-a16d029821e3\") "
Jan 27 14:23:06 crc kubenswrapper[4914]: I0127 14:23:06.881669 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a71458c-6815-4c11-94b0-a16d029821e3-catalog-content\") pod \"0a71458c-6815-4c11-94b0-a16d029821e3\" (UID: \"0a71458c-6815-4c11-94b0-a16d029821e3\") "
Jan 27 14:23:06 crc kubenswrapper[4914]: I0127 14:23:06.883333 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a71458c-6815-4c11-94b0-a16d029821e3-utilities" (OuterVolumeSpecName: "utilities") pod "0a71458c-6815-4c11-94b0-a16d029821e3" (UID: "0a71458c-6815-4c11-94b0-a16d029821e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:23:06 crc kubenswrapper[4914]: I0127 14:23:06.883530 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlzr4\" (UniqueName: \"kubernetes.io/projected/0a71458c-6815-4c11-94b0-a16d029821e3-kube-api-access-zlzr4\") pod \"0a71458c-6815-4c11-94b0-a16d029821e3\" (UID: \"0a71458c-6815-4c11-94b0-a16d029821e3\") "
Jan 27 14:23:06 crc kubenswrapper[4914]: I0127 14:23:06.884227 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a71458c-6815-4c11-94b0-a16d029821e3-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 14:23:06 crc kubenswrapper[4914]: I0127 14:23:06.902021 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a71458c-6815-4c11-94b0-a16d029821e3-kube-api-access-zlzr4" (OuterVolumeSpecName: "kube-api-access-zlzr4") pod "0a71458c-6815-4c11-94b0-a16d029821e3" (UID: "0a71458c-6815-4c11-94b0-a16d029821e3"). InnerVolumeSpecName "kube-api-access-zlzr4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:23:06 crc kubenswrapper[4914]: I0127 14:23:06.986697 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlzr4\" (UniqueName: \"kubernetes.io/projected/0a71458c-6815-4c11-94b0-a16d029821e3-kube-api-access-zlzr4\") on node \"crc\" DevicePath \"\""
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.322633 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h" event={"ID":"8b57afc2-5e5e-4268-ac55-f6237fb3f284","Type":"ContainerStarted","Data":"7de8e8110848b0c4cb1bbcfe51658276fe774cecd05bd64071bcbd33286e49f4"}
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.325285 4914 generic.go:334] "Generic (PLEG): container finished" podID="0a71458c-6815-4c11-94b0-a16d029821e3" containerID="f1c66b2afde081d85af0134ac4a800ad15a477911a3439fae1a931b2196b1a6e" exitCode=0
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.325311 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz9xz" event={"ID":"0a71458c-6815-4c11-94b0-a16d029821e3","Type":"ContainerDied","Data":"f1c66b2afde081d85af0134ac4a800ad15a477911a3439fae1a931b2196b1a6e"}
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.325331 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz9xz" event={"ID":"0a71458c-6815-4c11-94b0-a16d029821e3","Type":"ContainerDied","Data":"2f468c69679cdeff0ff11c5eea9488dcd9cd9ad4de3a0eddb70d77ad4faf8871"}
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.325340 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz9xz"
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.325348 4914 scope.go:117] "RemoveContainer" containerID="f1c66b2afde081d85af0134ac4a800ad15a477911a3439fae1a931b2196b1a6e"
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.342029 4914 scope.go:117] "RemoveContainer" containerID="a926df0d18121a52c9007d4e895b4e083bcee2a4246763438e4d857ee61069ab"
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.360696 4914 scope.go:117] "RemoveContainer" containerID="ab1d9fe3c162bd71aad7a15a5a292b590ec356dd5195782dd6696c2bbb11cd39"
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.398601 4914 scope.go:117] "RemoveContainer" containerID="f1c66b2afde081d85af0134ac4a800ad15a477911a3439fae1a931b2196b1a6e"
Jan 27 14:23:07 crc kubenswrapper[4914]: E0127 14:23:07.398975 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1c66b2afde081d85af0134ac4a800ad15a477911a3439fae1a931b2196b1a6e\": container with ID starting with f1c66b2afde081d85af0134ac4a800ad15a477911a3439fae1a931b2196b1a6e not found: ID does not exist" containerID="f1c66b2afde081d85af0134ac4a800ad15a477911a3439fae1a931b2196b1a6e"
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.399003 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1c66b2afde081d85af0134ac4a800ad15a477911a3439fae1a931b2196b1a6e"} err="failed to get container status \"f1c66b2afde081d85af0134ac4a800ad15a477911a3439fae1a931b2196b1a6e\": rpc error: code = NotFound desc = could not find container \"f1c66b2afde081d85af0134ac4a800ad15a477911a3439fae1a931b2196b1a6e\": container with ID starting with f1c66b2afde081d85af0134ac4a800ad15a477911a3439fae1a931b2196b1a6e not found: ID does not exist"
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.399026 4914 scope.go:117] "RemoveContainer" containerID="a926df0d18121a52c9007d4e895b4e083bcee2a4246763438e4d857ee61069ab"
Jan 27 14:23:07 crc kubenswrapper[4914]: E0127 14:23:07.399341 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a926df0d18121a52c9007d4e895b4e083bcee2a4246763438e4d857ee61069ab\": container with ID starting with a926df0d18121a52c9007d4e895b4e083bcee2a4246763438e4d857ee61069ab not found: ID does not exist" containerID="a926df0d18121a52c9007d4e895b4e083bcee2a4246763438e4d857ee61069ab"
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.399385 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a926df0d18121a52c9007d4e895b4e083bcee2a4246763438e4d857ee61069ab"} err="failed to get container status \"a926df0d18121a52c9007d4e895b4e083bcee2a4246763438e4d857ee61069ab\": rpc error: code = NotFound desc = could not find container \"a926df0d18121a52c9007d4e895b4e083bcee2a4246763438e4d857ee61069ab\": container with ID starting with a926df0d18121a52c9007d4e895b4e083bcee2a4246763438e4d857ee61069ab not found: ID does not exist"
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.399407 4914 scope.go:117] "RemoveContainer" containerID="ab1d9fe3c162bd71aad7a15a5a292b590ec356dd5195782dd6696c2bbb11cd39"
Jan 27 14:23:07 crc kubenswrapper[4914]: E0127 14:23:07.399636 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab1d9fe3c162bd71aad7a15a5a292b590ec356dd5195782dd6696c2bbb11cd39\": container with ID starting with ab1d9fe3c162bd71aad7a15a5a292b590ec356dd5195782dd6696c2bbb11cd39 not found: ID does not exist" containerID="ab1d9fe3c162bd71aad7a15a5a292b590ec356dd5195782dd6696c2bbb11cd39"
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.399658 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab1d9fe3c162bd71aad7a15a5a292b590ec356dd5195782dd6696c2bbb11cd39"} err="failed to get container status \"ab1d9fe3c162bd71aad7a15a5a292b590ec356dd5195782dd6696c2bbb11cd39\": rpc error: code = NotFound desc = could not find container \"ab1d9fe3c162bd71aad7a15a5a292b590ec356dd5195782dd6696c2bbb11cd39\": container with ID starting with ab1d9fe3c162bd71aad7a15a5a292b590ec356dd5195782dd6696c2bbb11cd39 not found: ID does not exist"
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.420403 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a71458c-6815-4c11-94b0-a16d029821e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a71458c-6815-4c11-94b0-a16d029821e3" (UID: "0a71458c-6815-4c11-94b0-a16d029821e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.496105 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a71458c-6815-4c11-94b0-a16d029821e3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.670840 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz9xz"]
Jan 27 14:23:07 crc kubenswrapper[4914]: I0127 14:23:07.683395 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz9xz"]
Jan 27 14:23:08 crc kubenswrapper[4914]: I0127 14:23:08.327442 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a71458c-6815-4c11-94b0-a16d029821e3" path="/var/lib/kubelet/pods/0a71458c-6815-4c11-94b0-a16d029821e3/volumes"
Jan 27 14:23:08 crc kubenswrapper[4914]: I0127 14:23:08.338666 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h"
event={"ID":"8b57afc2-5e5e-4268-ac55-f6237fb3f284","Type":"ContainerStarted","Data":"7d742906d2b7e0720b7175e8d46a8881e163f52c41b625b9a8a33d5e84dc0a31"} Jan 27 14:23:08 crc kubenswrapper[4914]: I0127 14:23:08.364187 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h" podStartSLOduration=2.256645012 podStartE2EDuration="3.364169185s" podCreationTimestamp="2026-01-27 14:23:05 +0000 UTC" firstStartedPulling="2026-01-27 14:23:06.423333506 +0000 UTC m=+2344.735683601" lastFinishedPulling="2026-01-27 14:23:07.530857669 +0000 UTC m=+2345.843207774" observedRunningTime="2026-01-27 14:23:08.356799314 +0000 UTC m=+2346.669149419" watchObservedRunningTime="2026-01-27 14:23:08.364169185 +0000 UTC m=+2346.676519270" Jan 27 14:23:12 crc kubenswrapper[4914]: I0127 14:23:12.308819 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:23:12 crc kubenswrapper[4914]: E0127 14:23:12.309815 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:23:23 crc kubenswrapper[4914]: I0127 14:23:23.293787 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:23:23 crc kubenswrapper[4914]: E0127 14:23:23.294706 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:23:38 crc kubenswrapper[4914]: I0127 14:23:38.295037 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:23:38 crc kubenswrapper[4914]: E0127 14:23:38.295768 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:23:50 crc kubenswrapper[4914]: I0127 14:23:50.295447 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:23:50 crc kubenswrapper[4914]: E0127 14:23:50.297373 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:23:54 crc kubenswrapper[4914]: I0127 14:23:54.733172 4914 generic.go:334] "Generic (PLEG): container finished" podID="8b57afc2-5e5e-4268-ac55-f6237fb3f284" containerID="7d742906d2b7e0720b7175e8d46a8881e163f52c41b625b9a8a33d5e84dc0a31" exitCode=0 Jan 27 14:23:54 crc kubenswrapper[4914]: I0127 14:23:54.733257 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h" event={"ID":"8b57afc2-5e5e-4268-ac55-f6237fb3f284","Type":"ContainerDied","Data":"7d742906d2b7e0720b7175e8d46a8881e163f52c41b625b9a8a33d5e84dc0a31"} Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.248873 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.397173 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-nova-metadata-neutron-config-0\") pod \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.397307 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-ssh-key-openstack-edpm-ipam\") pod \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.397503 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-neutron-ovn-metadata-agent-neutron-config-0\") pod \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.397613 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-inventory\") pod \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " Jan 27 14:23:56 crc kubenswrapper[4914]: 
I0127 14:23:56.397660 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-neutron-metadata-combined-ca-bundle\") pod \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.397991 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn4m7\" (UniqueName: \"kubernetes.io/projected/8b57afc2-5e5e-4268-ac55-f6237fb3f284-kube-api-access-xn4m7\") pod \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\" (UID: \"8b57afc2-5e5e-4268-ac55-f6237fb3f284\") " Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.406078 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "8b57afc2-5e5e-4268-ac55-f6237fb3f284" (UID: "8b57afc2-5e5e-4268-ac55-f6237fb3f284"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.408996 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b57afc2-5e5e-4268-ac55-f6237fb3f284-kube-api-access-xn4m7" (OuterVolumeSpecName: "kube-api-access-xn4m7") pod "8b57afc2-5e5e-4268-ac55-f6237fb3f284" (UID: "8b57afc2-5e5e-4268-ac55-f6237fb3f284"). InnerVolumeSpecName "kube-api-access-xn4m7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.436037 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b57afc2-5e5e-4268-ac55-f6237fb3f284" (UID: "8b57afc2-5e5e-4268-ac55-f6237fb3f284"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.446046 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "8b57afc2-5e5e-4268-ac55-f6237fb3f284" (UID: "8b57afc2-5e5e-4268-ac55-f6237fb3f284"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.450820 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-inventory" (OuterVolumeSpecName: "inventory") pod "8b57afc2-5e5e-4268-ac55-f6237fb3f284" (UID: "8b57afc2-5e5e-4268-ac55-f6237fb3f284"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.472036 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "8b57afc2-5e5e-4268-ac55-f6237fb3f284" (UID: "8b57afc2-5e5e-4268-ac55-f6237fb3f284"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.500620 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn4m7\" (UniqueName: \"kubernetes.io/projected/8b57afc2-5e5e-4268-ac55-f6237fb3f284-kube-api-access-xn4m7\") on node \"crc\" DevicePath \"\"" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.500657 4914 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.500674 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.500690 4914 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.500704 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.500717 4914 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b57afc2-5e5e-4268-ac55-f6237fb3f284-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.753954 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h" event={"ID":"8b57afc2-5e5e-4268-ac55-f6237fb3f284","Type":"ContainerDied","Data":"7de8e8110848b0c4cb1bbcfe51658276fe774cecd05bd64071bcbd33286e49f4"} Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.754000 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7de8e8110848b0c4cb1bbcfe51658276fe774cecd05bd64071bcbd33286e49f4" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.754019 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.857674 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x"] Jan 27 14:23:56 crc kubenswrapper[4914]: E0127 14:23:56.858138 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a71458c-6815-4c11-94b0-a16d029821e3" containerName="extract-content" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.858155 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a71458c-6815-4c11-94b0-a16d029821e3" containerName="extract-content" Jan 27 14:23:56 crc kubenswrapper[4914]: E0127 14:23:56.858187 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a71458c-6815-4c11-94b0-a16d029821e3" containerName="extract-utilities" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.858194 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a71458c-6815-4c11-94b0-a16d029821e3" containerName="extract-utilities" Jan 27 14:23:56 crc kubenswrapper[4914]: E0127 14:23:56.858206 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b57afc2-5e5e-4268-ac55-f6237fb3f284" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.858215 4914 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8b57afc2-5e5e-4268-ac55-f6237fb3f284" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 14:23:56 crc kubenswrapper[4914]: E0127 14:23:56.858224 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a71458c-6815-4c11-94b0-a16d029821e3" containerName="registry-server" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.858230 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a71458c-6815-4c11-94b0-a16d029821e3" containerName="registry-server" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.858405 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a71458c-6815-4c11-94b0-a16d029821e3" containerName="registry-server" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.858429 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b57afc2-5e5e-4268-ac55-f6237fb3f284" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.859178 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.861392 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.861710 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.862254 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.862460 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.862516 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs" Jan 27 14:23:56 crc kubenswrapper[4914]: I0127 14:23:56.868533 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x"] Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.009768 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.009869 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.010229 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.010340 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.010383 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwkg9\" (UniqueName: \"kubernetes.io/projected/900ff3a0-360d-4537-8bad-c2667319bb93-kube-api-access-zwkg9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.112235 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.112313 4914 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.112397 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.112435 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.112464 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwkg9\" (UniqueName: \"kubernetes.io/projected/900ff3a0-360d-4537-8bad-c2667319bb93-kube-api-access-zwkg9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.117908 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.117921 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.118176 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.119225 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.128410 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwkg9\" (UniqueName: \"kubernetes.io/projected/900ff3a0-360d-4537-8bad-c2667319bb93-kube-api-access-zwkg9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.229524 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.728523 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x"] Jan 27 14:23:57 crc kubenswrapper[4914]: I0127 14:23:57.766343 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" event={"ID":"900ff3a0-360d-4537-8bad-c2667319bb93","Type":"ContainerStarted","Data":"4b2695b9c3d839f006fe2aba0545fb46811f73d3bc970be4fbcdf082b4e3746d"} Jan 27 14:23:58 crc kubenswrapper[4914]: I0127 14:23:58.777755 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" event={"ID":"900ff3a0-360d-4537-8bad-c2667319bb93","Type":"ContainerStarted","Data":"11ef873dc94327ccf2c852a3616761a1dd188bda5ff88cf6823302f627a311e1"} Jan 27 14:23:58 crc kubenswrapper[4914]: I0127 14:23:58.804803 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" podStartSLOduration=2.301069033 podStartE2EDuration="2.804778979s" podCreationTimestamp="2026-01-27 14:23:56 +0000 UTC" firstStartedPulling="2026-01-27 14:23:57.729466474 +0000 UTC m=+2396.041816569" lastFinishedPulling="2026-01-27 14:23:58.23317643 +0000 UTC m=+2396.545526515" observedRunningTime="2026-01-27 14:23:58.795256479 +0000 UTC m=+2397.107606564" watchObservedRunningTime="2026-01-27 14:23:58.804778979 +0000 UTC m=+2397.117129084" Jan 27 14:24:02 crc kubenswrapper[4914]: I0127 14:24:02.301844 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:24:02 crc kubenswrapper[4914]: E0127 14:24:02.304824 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:24:13 crc kubenswrapper[4914]: I0127 14:24:13.295666 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:24:13 crc kubenswrapper[4914]: E0127 14:24:13.296508 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:24:27 crc kubenswrapper[4914]: I0127 14:24:27.294526 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:24:27 crc kubenswrapper[4914]: E0127 14:24:27.295608 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:24:41 crc kubenswrapper[4914]: I0127 14:24:41.294797 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:24:41 crc kubenswrapper[4914]: E0127 14:24:41.295626 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:24:53 crc kubenswrapper[4914]: I0127 14:24:53.294510 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:24:53 crc kubenswrapper[4914]: E0127 14:24:53.295164 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:25:04 crc kubenswrapper[4914]: I0127 14:25:04.294856 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:25:04 crc kubenswrapper[4914]: E0127 14:25:04.295709 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:25:17 crc kubenswrapper[4914]: I0127 14:25:17.294287 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:25:17 crc kubenswrapper[4914]: E0127 14:25:17.295150 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:25:28 crc kubenswrapper[4914]: I0127 14:25:28.294496 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:25:28 crc kubenswrapper[4914]: E0127 14:25:28.296267 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:25:42 crc kubenswrapper[4914]: I0127 14:25:42.301437 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:25:42 crc kubenswrapper[4914]: E0127 14:25:42.302342 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:25:56 crc kubenswrapper[4914]: I0127 14:25:56.294879 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:25:56 crc kubenswrapper[4914]: E0127 14:25:56.295936 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:26:11 crc kubenswrapper[4914]: I0127 14:26:11.294038 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:26:11 crc kubenswrapper[4914]: E0127 14:26:11.294933 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:26:24 crc kubenswrapper[4914]: I0127 14:26:24.295176 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:26:24 crc kubenswrapper[4914]: E0127 14:26:24.297210 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:26:37 crc kubenswrapper[4914]: I0127 14:26:37.296263 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:26:37 crc kubenswrapper[4914]: E0127 14:26:37.297017 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:26:51 crc kubenswrapper[4914]: I0127 14:26:51.295628 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:26:51 crc kubenswrapper[4914]: E0127 14:26:51.296585 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:27:03 crc kubenswrapper[4914]: I0127 14:27:03.294711 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:27:03 crc kubenswrapper[4914]: E0127 14:27:03.295700 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:27:14 crc kubenswrapper[4914]: I0127 14:27:14.294603 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:27:15 crc kubenswrapper[4914]: I0127 14:27:15.531982 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"3642bb8730ee946e0fc1bb6288bad6c73ce709eb88f502af6460b4dc68554a20"} Jan 27 14:28:11 crc kubenswrapper[4914]: I0127 14:28:11.022262 4914 generic.go:334] "Generic (PLEG): container finished" podID="900ff3a0-360d-4537-8bad-c2667319bb93" containerID="11ef873dc94327ccf2c852a3616761a1dd188bda5ff88cf6823302f627a311e1" exitCode=0 Jan 27 14:28:11 crc kubenswrapper[4914]: I0127 14:28:11.022797 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" event={"ID":"900ff3a0-360d-4537-8bad-c2667319bb93","Type":"ContainerDied","Data":"11ef873dc94327ccf2c852a3616761a1dd188bda5ff88cf6823302f627a311e1"} Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.441322 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.622784 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-ssh-key-openstack-edpm-ipam\") pod \"900ff3a0-360d-4537-8bad-c2667319bb93\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.622999 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwkg9\" (UniqueName: \"kubernetes.io/projected/900ff3a0-360d-4537-8bad-c2667319bb93-kube-api-access-zwkg9\") pod \"900ff3a0-360d-4537-8bad-c2667319bb93\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.623128 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-libvirt-combined-ca-bundle\") pod \"900ff3a0-360d-4537-8bad-c2667319bb93\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.623190 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-libvirt-secret-0\") pod \"900ff3a0-360d-4537-8bad-c2667319bb93\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.623222 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-inventory\") pod \"900ff3a0-360d-4537-8bad-c2667319bb93\" (UID: \"900ff3a0-360d-4537-8bad-c2667319bb93\") " Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.629210 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900ff3a0-360d-4537-8bad-c2667319bb93-kube-api-access-zwkg9" (OuterVolumeSpecName: "kube-api-access-zwkg9") pod "900ff3a0-360d-4537-8bad-c2667319bb93" (UID: "900ff3a0-360d-4537-8bad-c2667319bb93"). InnerVolumeSpecName "kube-api-access-zwkg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.636146 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "900ff3a0-360d-4537-8bad-c2667319bb93" (UID: "900ff3a0-360d-4537-8bad-c2667319bb93"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.651483 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "900ff3a0-360d-4537-8bad-c2667319bb93" (UID: "900ff3a0-360d-4537-8bad-c2667319bb93"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.675140 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "900ff3a0-360d-4537-8bad-c2667319bb93" (UID: "900ff3a0-360d-4537-8bad-c2667319bb93"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.677614 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-inventory" (OuterVolumeSpecName: "inventory") pod "900ff3a0-360d-4537-8bad-c2667319bb93" (UID: "900ff3a0-360d-4537-8bad-c2667319bb93"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.728721 4914 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.728768 4914 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.728781 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.728794 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/900ff3a0-360d-4537-8bad-c2667319bb93-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:28:12 crc kubenswrapper[4914]: I0127 14:28:12.728807 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwkg9\" (UniqueName: \"kubernetes.io/projected/900ff3a0-360d-4537-8bad-c2667319bb93-kube-api-access-zwkg9\") on node \"crc\" DevicePath \"\"" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.041276 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" event={"ID":"900ff3a0-360d-4537-8bad-c2667319bb93","Type":"ContainerDied","Data":"4b2695b9c3d839f006fe2aba0545fb46811f73d3bc970be4fbcdf082b4e3746d"} Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.041318 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b2695b9c3d839f006fe2aba0545fb46811f73d3bc970be4fbcdf082b4e3746d" Jan 27 
14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.041388 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.132950 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd"] Jan 27 14:28:13 crc kubenswrapper[4914]: E0127 14:28:13.133510 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900ff3a0-360d-4537-8bad-c2667319bb93" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.133533 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="900ff3a0-360d-4537-8bad-c2667319bb93" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.133830 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="900ff3a0-360d-4537-8bad-c2667319bb93" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.134657 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.145947 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.146378 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.146398 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.146517 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.148274 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.148538 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.148673 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.150530 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd"] Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.235382 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 
14:28:13.235441 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.235576 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvsfd\" (UniqueName: \"kubernetes.io/projected/2f1076b4-75fa-4313-b8f7-7071b8da40d2-kube-api-access-fvsfd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.235619 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.235665 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.235701 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.235761 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.235792 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.235871 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.336703 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: 
\"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.337401 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.337445 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.337503 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.337577 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.338333 4914 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.338426 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvsfd\" (UniqueName: \"kubernetes.io/projected/2f1076b4-75fa-4313-b8f7-7071b8da40d2-kube-api-access-fvsfd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.338471 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.338518 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.338916 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: 
\"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.342747 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.343709 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.344637 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.344772 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.345055 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.345072 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.348107 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.361176 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvsfd\" (UniqueName: \"kubernetes.io/projected/2f1076b4-75fa-4313-b8f7-7071b8da40d2-kube-api-access-fvsfd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qbcgd\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:13 crc kubenswrapper[4914]: I0127 14:28:13.519393 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:28:14 crc kubenswrapper[4914]: I0127 14:28:14.026277 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd"] Jan 27 14:28:14 crc kubenswrapper[4914]: I0127 14:28:14.033318 4914 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:28:14 crc kubenswrapper[4914]: I0127 14:28:14.051952 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" event={"ID":"2f1076b4-75fa-4313-b8f7-7071b8da40d2","Type":"ContainerStarted","Data":"7539d8aa72645553a0bd214e3c232c124cec444a63ae8d83af5a0af4d684426c"} Jan 27 14:28:15 crc kubenswrapper[4914]: I0127 14:28:15.061875 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" event={"ID":"2f1076b4-75fa-4313-b8f7-7071b8da40d2","Type":"ContainerStarted","Data":"5e51a76908246f6ac11af6a11771fd4b2b4899f2869215c07d3d04daa01c5625"} Jan 27 14:28:15 crc kubenswrapper[4914]: I0127 14:28:15.083358 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" podStartSLOduration=1.528292926 podStartE2EDuration="2.083336402s" podCreationTimestamp="2026-01-27 14:28:13 +0000 UTC" firstStartedPulling="2026-01-27 14:28:14.033032504 +0000 UTC m=+2652.345382589" lastFinishedPulling="2026-01-27 14:28:14.58807598 +0000 UTC m=+2652.900426065" observedRunningTime="2026-01-27 14:28:15.081582894 +0000 UTC m=+2653.393932999" watchObservedRunningTime="2026-01-27 14:28:15.083336402 +0000 UTC m=+2653.395686497" Jan 27 14:29:37 crc kubenswrapper[4914]: I0127 14:29:37.690755 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:29:37 crc kubenswrapper[4914]: I0127 14:29:37.691382 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.159927 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq"] Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.162210 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.165227 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.175024 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq"] Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.177477 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.362177 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54a2492b-2418-4869-8c96-492f9fae2dce-secret-volume\") pod \"collect-profiles-29492070-lrbjq\" (UID: \"54a2492b-2418-4869-8c96-492f9fae2dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.362227 
4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54a2492b-2418-4869-8c96-492f9fae2dce-config-volume\") pod \"collect-profiles-29492070-lrbjq\" (UID: \"54a2492b-2418-4869-8c96-492f9fae2dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.363057 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffrsw\" (UniqueName: \"kubernetes.io/projected/54a2492b-2418-4869-8c96-492f9fae2dce-kube-api-access-ffrsw\") pod \"collect-profiles-29492070-lrbjq\" (UID: \"54a2492b-2418-4869-8c96-492f9fae2dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.465414 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffrsw\" (UniqueName: \"kubernetes.io/projected/54a2492b-2418-4869-8c96-492f9fae2dce-kube-api-access-ffrsw\") pod \"collect-profiles-29492070-lrbjq\" (UID: \"54a2492b-2418-4869-8c96-492f9fae2dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.465637 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54a2492b-2418-4869-8c96-492f9fae2dce-secret-volume\") pod \"collect-profiles-29492070-lrbjq\" (UID: \"54a2492b-2418-4869-8c96-492f9fae2dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.465663 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54a2492b-2418-4869-8c96-492f9fae2dce-config-volume\") pod \"collect-profiles-29492070-lrbjq\" (UID: 
\"54a2492b-2418-4869-8c96-492f9fae2dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.467255 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54a2492b-2418-4869-8c96-492f9fae2dce-config-volume\") pod \"collect-profiles-29492070-lrbjq\" (UID: \"54a2492b-2418-4869-8c96-492f9fae2dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.473862 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54a2492b-2418-4869-8c96-492f9fae2dce-secret-volume\") pod \"collect-profiles-29492070-lrbjq\" (UID: \"54a2492b-2418-4869-8c96-492f9fae2dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.484312 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffrsw\" (UniqueName: \"kubernetes.io/projected/54a2492b-2418-4869-8c96-492f9fae2dce-kube-api-access-ffrsw\") pod \"collect-profiles-29492070-lrbjq\" (UID: \"54a2492b-2418-4869-8c96-492f9fae2dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" Jan 27 14:30:00 crc kubenswrapper[4914]: I0127 14:30:00.487211 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" Jan 27 14:30:01 crc kubenswrapper[4914]: I0127 14:30:01.037643 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq"] Jan 27 14:30:01 crc kubenswrapper[4914]: I0127 14:30:01.995755 4914 generic.go:334] "Generic (PLEG): container finished" podID="54a2492b-2418-4869-8c96-492f9fae2dce" containerID="1588a4de8ee0432cbd7774488b58aa1c79f0ba1a843c14d835e727d5304f919f" exitCode=0 Jan 27 14:30:01 crc kubenswrapper[4914]: I0127 14:30:01.995922 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" event={"ID":"54a2492b-2418-4869-8c96-492f9fae2dce","Type":"ContainerDied","Data":"1588a4de8ee0432cbd7774488b58aa1c79f0ba1a843c14d835e727d5304f919f"} Jan 27 14:30:01 crc kubenswrapper[4914]: I0127 14:30:01.996201 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" event={"ID":"54a2492b-2418-4869-8c96-492f9fae2dce","Type":"ContainerStarted","Data":"75304f2492707beac9cf1ad6ed493d89b0cb8177686902fa015fbe78a520c826"} Jan 27 14:30:03 crc kubenswrapper[4914]: I0127 14:30:03.335596 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" Jan 27 14:30:03 crc kubenswrapper[4914]: I0127 14:30:03.523687 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54a2492b-2418-4869-8c96-492f9fae2dce-config-volume\") pod \"54a2492b-2418-4869-8c96-492f9fae2dce\" (UID: \"54a2492b-2418-4869-8c96-492f9fae2dce\") " Jan 27 14:30:03 crc kubenswrapper[4914]: I0127 14:30:03.524255 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54a2492b-2418-4869-8c96-492f9fae2dce-config-volume" (OuterVolumeSpecName: "config-volume") pod "54a2492b-2418-4869-8c96-492f9fae2dce" (UID: "54a2492b-2418-4869-8c96-492f9fae2dce"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:03 crc kubenswrapper[4914]: I0127 14:30:03.524302 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffrsw\" (UniqueName: \"kubernetes.io/projected/54a2492b-2418-4869-8c96-492f9fae2dce-kube-api-access-ffrsw\") pod \"54a2492b-2418-4869-8c96-492f9fae2dce\" (UID: \"54a2492b-2418-4869-8c96-492f9fae2dce\") " Jan 27 14:30:03 crc kubenswrapper[4914]: I0127 14:30:03.524380 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54a2492b-2418-4869-8c96-492f9fae2dce-secret-volume\") pod \"54a2492b-2418-4869-8c96-492f9fae2dce\" (UID: \"54a2492b-2418-4869-8c96-492f9fae2dce\") " Jan 27 14:30:03 crc kubenswrapper[4914]: I0127 14:30:03.525138 4914 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54a2492b-2418-4869-8c96-492f9fae2dce-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:03 crc kubenswrapper[4914]: I0127 14:30:03.529496 4914 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/54a2492b-2418-4869-8c96-492f9fae2dce-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "54a2492b-2418-4869-8c96-492f9fae2dce" (UID: "54a2492b-2418-4869-8c96-492f9fae2dce"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:03 crc kubenswrapper[4914]: I0127 14:30:03.533321 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54a2492b-2418-4869-8c96-492f9fae2dce-kube-api-access-ffrsw" (OuterVolumeSpecName: "kube-api-access-ffrsw") pod "54a2492b-2418-4869-8c96-492f9fae2dce" (UID: "54a2492b-2418-4869-8c96-492f9fae2dce"). InnerVolumeSpecName "kube-api-access-ffrsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:03 crc kubenswrapper[4914]: I0127 14:30:03.626796 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffrsw\" (UniqueName: \"kubernetes.io/projected/54a2492b-2418-4869-8c96-492f9fae2dce-kube-api-access-ffrsw\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:03 crc kubenswrapper[4914]: I0127 14:30:03.626840 4914 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54a2492b-2418-4869-8c96-492f9fae2dce-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.013019 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" event={"ID":"54a2492b-2418-4869-8c96-492f9fae2dce","Type":"ContainerDied","Data":"75304f2492707beac9cf1ad6ed493d89b0cb8177686902fa015fbe78a520c826"} Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.013066 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75304f2492707beac9cf1ad6ed493d89b0cb8177686902fa015fbe78a520c826" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.013081 4914 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-lrbjq" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.364363 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t8qdz"] Jan 27 14:30:04 crc kubenswrapper[4914]: E0127 14:30:04.365020 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54a2492b-2418-4869-8c96-492f9fae2dce" containerName="collect-profiles" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.365032 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="54a2492b-2418-4869-8c96-492f9fae2dce" containerName="collect-profiles" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.365208 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="54a2492b-2418-4869-8c96-492f9fae2dce" containerName="collect-profiles" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.367380 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.374133 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8qdz"] Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.429788 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26"] Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.438080 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492025-hbn26"] Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.543057 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61775761-ed28-4621-9177-b9e704d46cac-utilities\") pod \"certified-operators-t8qdz\" (UID: \"61775761-ed28-4621-9177-b9e704d46cac\") " 
pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.543143 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61775761-ed28-4621-9177-b9e704d46cac-catalog-content\") pod \"certified-operators-t8qdz\" (UID: \"61775761-ed28-4621-9177-b9e704d46cac\") " pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.543232 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lk2w\" (UniqueName: \"kubernetes.io/projected/61775761-ed28-4621-9177-b9e704d46cac-kube-api-access-5lk2w\") pod \"certified-operators-t8qdz\" (UID: \"61775761-ed28-4621-9177-b9e704d46cac\") " pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.645642 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61775761-ed28-4621-9177-b9e704d46cac-utilities\") pod \"certified-operators-t8qdz\" (UID: \"61775761-ed28-4621-9177-b9e704d46cac\") " pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.645717 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61775761-ed28-4621-9177-b9e704d46cac-catalog-content\") pod \"certified-operators-t8qdz\" (UID: \"61775761-ed28-4621-9177-b9e704d46cac\") " pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.645789 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lk2w\" (UniqueName: \"kubernetes.io/projected/61775761-ed28-4621-9177-b9e704d46cac-kube-api-access-5lk2w\") pod \"certified-operators-t8qdz\" (UID: 
\"61775761-ed28-4621-9177-b9e704d46cac\") " pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.646265 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61775761-ed28-4621-9177-b9e704d46cac-utilities\") pod \"certified-operators-t8qdz\" (UID: \"61775761-ed28-4621-9177-b9e704d46cac\") " pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.646439 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61775761-ed28-4621-9177-b9e704d46cac-catalog-content\") pod \"certified-operators-t8qdz\" (UID: \"61775761-ed28-4621-9177-b9e704d46cac\") " pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.667480 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lk2w\" (UniqueName: \"kubernetes.io/projected/61775761-ed28-4621-9177-b9e704d46cac-kube-api-access-5lk2w\") pod \"certified-operators-t8qdz\" (UID: \"61775761-ed28-4621-9177-b9e704d46cac\") " pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:04 crc kubenswrapper[4914]: I0127 14:30:04.684121 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:05 crc kubenswrapper[4914]: I0127 14:30:05.201425 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8qdz"] Jan 27 14:30:06 crc kubenswrapper[4914]: I0127 14:30:06.050145 4914 generic.go:334] "Generic (PLEG): container finished" podID="61775761-ed28-4621-9177-b9e704d46cac" containerID="1431e481a2486b03b2aaff64e5490f23b9f683abe8593dff8baa9b2cd5443170" exitCode=0 Jan 27 14:30:06 crc kubenswrapper[4914]: I0127 14:30:06.050220 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8qdz" event={"ID":"61775761-ed28-4621-9177-b9e704d46cac","Type":"ContainerDied","Data":"1431e481a2486b03b2aaff64e5490f23b9f683abe8593dff8baa9b2cd5443170"} Jan 27 14:30:06 crc kubenswrapper[4914]: I0127 14:30:06.050438 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8qdz" event={"ID":"61775761-ed28-4621-9177-b9e704d46cac","Type":"ContainerStarted","Data":"7b44723381ffd807aece1548473e9f33da62413a043708139205033b5b9d7c2e"} Jan 27 14:30:06 crc kubenswrapper[4914]: I0127 14:30:06.306725 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c42d7bb3-fffc-4dd8-bc41-151b5b2df45d" path="/var/lib/kubelet/pods/c42d7bb3-fffc-4dd8-bc41-151b5b2df45d/volumes" Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.060688 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8qdz" event={"ID":"61775761-ed28-4621-9177-b9e704d46cac","Type":"ContainerStarted","Data":"30c1b8e50c9afe499cd92cbd581f9c5f5b18b0d2090e7538667048b0000304da"} Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.565477 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vvdwq"] Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.568064 4914 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.578815 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvdwq"] Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.691006 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.691063 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.707505 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb40c293-9f54-44fe-b48c-c58f30f18fdb-catalog-content\") pod \"redhat-operators-vvdwq\" (UID: \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\") " pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.707571 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22k6x\" (UniqueName: \"kubernetes.io/projected/bb40c293-9f54-44fe-b48c-c58f30f18fdb-kube-api-access-22k6x\") pod \"redhat-operators-vvdwq\" (UID: \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\") " pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.707625 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb40c293-9f54-44fe-b48c-c58f30f18fdb-utilities\") pod \"redhat-operators-vvdwq\" (UID: \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\") " pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.809119 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb40c293-9f54-44fe-b48c-c58f30f18fdb-utilities\") pod \"redhat-operators-vvdwq\" (UID: \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\") " pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.809262 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb40c293-9f54-44fe-b48c-c58f30f18fdb-catalog-content\") pod \"redhat-operators-vvdwq\" (UID: \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\") " pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.809309 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22k6x\" (UniqueName: \"kubernetes.io/projected/bb40c293-9f54-44fe-b48c-c58f30f18fdb-kube-api-access-22k6x\") pod \"redhat-operators-vvdwq\" (UID: \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\") " pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.809670 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb40c293-9f54-44fe-b48c-c58f30f18fdb-utilities\") pod \"redhat-operators-vvdwq\" (UID: \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\") " pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.809694 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bb40c293-9f54-44fe-b48c-c58f30f18fdb-catalog-content\") pod \"redhat-operators-vvdwq\" (UID: \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\") " pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.828971 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22k6x\" (UniqueName: \"kubernetes.io/projected/bb40c293-9f54-44fe-b48c-c58f30f18fdb-kube-api-access-22k6x\") pod \"redhat-operators-vvdwq\" (UID: \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\") " pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:07 crc kubenswrapper[4914]: I0127 14:30:07.890687 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:08 crc kubenswrapper[4914]: I0127 14:30:08.085744 4914 generic.go:334] "Generic (PLEG): container finished" podID="61775761-ed28-4621-9177-b9e704d46cac" containerID="30c1b8e50c9afe499cd92cbd581f9c5f5b18b0d2090e7538667048b0000304da" exitCode=0 Jan 27 14:30:08 crc kubenswrapper[4914]: I0127 14:30:08.086104 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8qdz" event={"ID":"61775761-ed28-4621-9177-b9e704d46cac","Type":"ContainerDied","Data":"30c1b8e50c9afe499cd92cbd581f9c5f5b18b0d2090e7538667048b0000304da"} Jan 27 14:30:08 crc kubenswrapper[4914]: W0127 14:30:08.340562 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb40c293_9f54_44fe_b48c_c58f30f18fdb.slice/crio-c6dbcafc921209d3db2452042fcb08aebc8d3f632896d3d0b40733698dfa9bae WatchSource:0}: Error finding container c6dbcafc921209d3db2452042fcb08aebc8d3f632896d3d0b40733698dfa9bae: Status 404 returned error can't find the container with id c6dbcafc921209d3db2452042fcb08aebc8d3f632896d3d0b40733698dfa9bae Jan 27 14:30:08 crc kubenswrapper[4914]: I0127 14:30:08.345101 4914 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvdwq"] Jan 27 14:30:09 crc kubenswrapper[4914]: I0127 14:30:09.095306 4914 generic.go:334] "Generic (PLEG): container finished" podID="bb40c293-9f54-44fe-b48c-c58f30f18fdb" containerID="196fdf83c9feba03823bfab2533d3638cfeeb48138f246b400fc38656d6e315e" exitCode=0 Jan 27 14:30:09 crc kubenswrapper[4914]: I0127 14:30:09.095349 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvdwq" event={"ID":"bb40c293-9f54-44fe-b48c-c58f30f18fdb","Type":"ContainerDied","Data":"196fdf83c9feba03823bfab2533d3638cfeeb48138f246b400fc38656d6e315e"} Jan 27 14:30:09 crc kubenswrapper[4914]: I0127 14:30:09.095682 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvdwq" event={"ID":"bb40c293-9f54-44fe-b48c-c58f30f18fdb","Type":"ContainerStarted","Data":"c6dbcafc921209d3db2452042fcb08aebc8d3f632896d3d0b40733698dfa9bae"} Jan 27 14:30:09 crc kubenswrapper[4914]: I0127 14:30:09.099111 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8qdz" event={"ID":"61775761-ed28-4621-9177-b9e704d46cac","Type":"ContainerStarted","Data":"aac11c7192f358dea2c8ba2cd0ee7ed723443fe36ae9292abc33c33a456bb6ff"} Jan 27 14:30:09 crc kubenswrapper[4914]: I0127 14:30:09.147410 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t8qdz" podStartSLOduration=2.51257454 podStartE2EDuration="5.147389626s" podCreationTimestamp="2026-01-27 14:30:04 +0000 UTC" firstStartedPulling="2026-01-27 14:30:06.053379884 +0000 UTC m=+2764.365729969" lastFinishedPulling="2026-01-27 14:30:08.68819497 +0000 UTC m=+2767.000545055" observedRunningTime="2026-01-27 14:30:09.131215684 +0000 UTC m=+2767.443565779" watchObservedRunningTime="2026-01-27 14:30:09.147389626 +0000 UTC m=+2767.459739711" Jan 27 14:30:11 crc kubenswrapper[4914]: I0127 14:30:11.118677 
4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvdwq" event={"ID":"bb40c293-9f54-44fe-b48c-c58f30f18fdb","Type":"ContainerStarted","Data":"bc795313649790335b5ebede5b3bea71fea27eceaf9243517eb2346a9d8df560"} Jan 27 14:30:14 crc kubenswrapper[4914]: I0127 14:30:14.694401 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:14 crc kubenswrapper[4914]: I0127 14:30:14.694795 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:14 crc kubenswrapper[4914]: I0127 14:30:14.789201 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:15 crc kubenswrapper[4914]: I0127 14:30:15.155781 4914 generic.go:334] "Generic (PLEG): container finished" podID="bb40c293-9f54-44fe-b48c-c58f30f18fdb" containerID="bc795313649790335b5ebede5b3bea71fea27eceaf9243517eb2346a9d8df560" exitCode=0 Jan 27 14:30:15 crc kubenswrapper[4914]: I0127 14:30:15.155879 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvdwq" event={"ID":"bb40c293-9f54-44fe-b48c-c58f30f18fdb","Type":"ContainerDied","Data":"bc795313649790335b5ebede5b3bea71fea27eceaf9243517eb2346a9d8df560"} Jan 27 14:30:15 crc kubenswrapper[4914]: I0127 14:30:15.207715 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:15 crc kubenswrapper[4914]: I0127 14:30:15.384693 4914 scope.go:117] "RemoveContainer" containerID="62db313e514cc3ffb05a45edec75a30cb5e2c42f2aace1f48819576419125f98" Jan 27 14:30:15 crc kubenswrapper[4914]: I0127 14:30:15.753796 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8qdz"] Jan 27 14:30:17 crc kubenswrapper[4914]: I0127 
14:30:17.174979 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t8qdz" podUID="61775761-ed28-4621-9177-b9e704d46cac" containerName="registry-server" containerID="cri-o://aac11c7192f358dea2c8ba2cd0ee7ed723443fe36ae9292abc33c33a456bb6ff" gracePeriod=2 Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.187541 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvdwq" event={"ID":"bb40c293-9f54-44fe-b48c-c58f30f18fdb","Type":"ContainerStarted","Data":"137356f4536a43d72444f559c7850827ca6bfaa07f60182c278ee4b0733c843e"} Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.191045 4914 generic.go:334] "Generic (PLEG): container finished" podID="61775761-ed28-4621-9177-b9e704d46cac" containerID="aac11c7192f358dea2c8ba2cd0ee7ed723443fe36ae9292abc33c33a456bb6ff" exitCode=0 Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.191102 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8qdz" event={"ID":"61775761-ed28-4621-9177-b9e704d46cac","Type":"ContainerDied","Data":"aac11c7192f358dea2c8ba2cd0ee7ed723443fe36ae9292abc33c33a456bb6ff"} Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.212747 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vvdwq" podStartSLOduration=2.837805663 podStartE2EDuration="11.212724785s" podCreationTimestamp="2026-01-27 14:30:07 +0000 UTC" firstStartedPulling="2026-01-27 14:30:09.097693019 +0000 UTC m=+2767.410043104" lastFinishedPulling="2026-01-27 14:30:17.472612141 +0000 UTC m=+2775.784962226" observedRunningTime="2026-01-27 14:30:18.210031291 +0000 UTC m=+2776.522381396" watchObservedRunningTime="2026-01-27 14:30:18.212724785 +0000 UTC m=+2776.525074870" Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.432766 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.509595 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61775761-ed28-4621-9177-b9e704d46cac-catalog-content\") pod \"61775761-ed28-4621-9177-b9e704d46cac\" (UID: \"61775761-ed28-4621-9177-b9e704d46cac\") " Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.509854 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lk2w\" (UniqueName: \"kubernetes.io/projected/61775761-ed28-4621-9177-b9e704d46cac-kube-api-access-5lk2w\") pod \"61775761-ed28-4621-9177-b9e704d46cac\" (UID: \"61775761-ed28-4621-9177-b9e704d46cac\") " Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.509956 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61775761-ed28-4621-9177-b9e704d46cac-utilities\") pod \"61775761-ed28-4621-9177-b9e704d46cac\" (UID: \"61775761-ed28-4621-9177-b9e704d46cac\") " Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.510899 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61775761-ed28-4621-9177-b9e704d46cac-utilities" (OuterVolumeSpecName: "utilities") pod "61775761-ed28-4621-9177-b9e704d46cac" (UID: "61775761-ed28-4621-9177-b9e704d46cac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.519327 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61775761-ed28-4621-9177-b9e704d46cac-kube-api-access-5lk2w" (OuterVolumeSpecName: "kube-api-access-5lk2w") pod "61775761-ed28-4621-9177-b9e704d46cac" (UID: "61775761-ed28-4621-9177-b9e704d46cac"). InnerVolumeSpecName "kube-api-access-5lk2w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.563140 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61775761-ed28-4621-9177-b9e704d46cac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61775761-ed28-4621-9177-b9e704d46cac" (UID: "61775761-ed28-4621-9177-b9e704d46cac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.612408 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lk2w\" (UniqueName: \"kubernetes.io/projected/61775761-ed28-4621-9177-b9e704d46cac-kube-api-access-5lk2w\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.612447 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61775761-ed28-4621-9177-b9e704d46cac-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:18 crc kubenswrapper[4914]: I0127 14:30:18.612458 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61775761-ed28-4621-9177-b9e704d46cac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:19 crc kubenswrapper[4914]: I0127 14:30:19.201584 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8qdz" event={"ID":"61775761-ed28-4621-9177-b9e704d46cac","Type":"ContainerDied","Data":"7b44723381ffd807aece1548473e9f33da62413a043708139205033b5b9d7c2e"} Jan 27 14:30:19 crc kubenswrapper[4914]: I0127 14:30:19.201632 4914 scope.go:117] "RemoveContainer" containerID="aac11c7192f358dea2c8ba2cd0ee7ed723443fe36ae9292abc33c33a456bb6ff" Jan 27 14:30:19 crc kubenswrapper[4914]: I0127 14:30:19.201652 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8qdz" Jan 27 14:30:19 crc kubenswrapper[4914]: I0127 14:30:19.227343 4914 scope.go:117] "RemoveContainer" containerID="30c1b8e50c9afe499cd92cbd581f9c5f5b18b0d2090e7538667048b0000304da" Jan 27 14:30:19 crc kubenswrapper[4914]: I0127 14:30:19.247984 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8qdz"] Jan 27 14:30:19 crc kubenswrapper[4914]: I0127 14:30:19.258393 4914 scope.go:117] "RemoveContainer" containerID="1431e481a2486b03b2aaff64e5490f23b9f683abe8593dff8baa9b2cd5443170" Jan 27 14:30:19 crc kubenswrapper[4914]: I0127 14:30:19.260783 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t8qdz"] Jan 27 14:30:20 crc kubenswrapper[4914]: I0127 14:30:20.306816 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61775761-ed28-4621-9177-b9e704d46cac" path="/var/lib/kubelet/pods/61775761-ed28-4621-9177-b9e704d46cac/volumes" Jan 27 14:30:27 crc kubenswrapper[4914]: I0127 14:30:27.891444 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:27 crc kubenswrapper[4914]: I0127 14:30:27.892452 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:27 crc kubenswrapper[4914]: I0127 14:30:27.943018 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:28 crc kubenswrapper[4914]: I0127 14:30:28.334316 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:28 crc kubenswrapper[4914]: I0127 14:30:28.380209 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvdwq"] Jan 27 14:30:29 crc kubenswrapper[4914]: I0127 
14:30:29.301292 4914 generic.go:334] "Generic (PLEG): container finished" podID="2f1076b4-75fa-4313-b8f7-7071b8da40d2" containerID="5e51a76908246f6ac11af6a11771fd4b2b4899f2869215c07d3d04daa01c5625" exitCode=0 Jan 27 14:30:29 crc kubenswrapper[4914]: I0127 14:30:29.301410 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" event={"ID":"2f1076b4-75fa-4313-b8f7-7071b8da40d2","Type":"ContainerDied","Data":"5e51a76908246f6ac11af6a11771fd4b2b4899f2869215c07d3d04daa01c5625"} Jan 27 14:30:30 crc kubenswrapper[4914]: I0127 14:30:30.309233 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vvdwq" podUID="bb40c293-9f54-44fe-b48c-c58f30f18fdb" containerName="registry-server" containerID="cri-o://137356f4536a43d72444f559c7850827ca6bfaa07f60182c278ee4b0733c843e" gracePeriod=2 Jan 27 14:30:30 crc kubenswrapper[4914]: I0127 14:30:30.958996 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:30:30 crc kubenswrapper[4914]: I0127 14:30:30.965592 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.080665 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-combined-ca-bundle\") pod \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.080897 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb40c293-9f54-44fe-b48c-c58f30f18fdb-utilities\") pod \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\" (UID: \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\") " Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.081071 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-extra-config-0\") pod \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.081125 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-migration-ssh-key-0\") pod \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.081184 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-ssh-key-openstack-edpm-ipam\") pod \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.081229 4914 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-inventory\") pod \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.081284 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-cell1-compute-config-1\") pod \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.081328 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22k6x\" (UniqueName: \"kubernetes.io/projected/bb40c293-9f54-44fe-b48c-c58f30f18fdb-kube-api-access-22k6x\") pod \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\" (UID: \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\") " Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.081561 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-migration-ssh-key-1\") pod \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.081590 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb40c293-9f54-44fe-b48c-c58f30f18fdb-catalog-content\") pod \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\" (UID: \"bb40c293-9f54-44fe-b48c-c58f30f18fdb\") " Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.081629 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-cell1-compute-config-0\") pod \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.081670 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvsfd\" (UniqueName: \"kubernetes.io/projected/2f1076b4-75fa-4313-b8f7-7071b8da40d2-kube-api-access-fvsfd\") pod \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\" (UID: \"2f1076b4-75fa-4313-b8f7-7071b8da40d2\") " Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.082667 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb40c293-9f54-44fe-b48c-c58f30f18fdb-utilities" (OuterVolumeSpecName: "utilities") pod "bb40c293-9f54-44fe-b48c-c58f30f18fdb" (UID: "bb40c293-9f54-44fe-b48c-c58f30f18fdb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.090536 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f1076b4-75fa-4313-b8f7-7071b8da40d2-kube-api-access-fvsfd" (OuterVolumeSpecName: "kube-api-access-fvsfd") pod "2f1076b4-75fa-4313-b8f7-7071b8da40d2" (UID: "2f1076b4-75fa-4313-b8f7-7071b8da40d2"). InnerVolumeSpecName "kube-api-access-fvsfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.090914 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb40c293-9f54-44fe-b48c-c58f30f18fdb-kube-api-access-22k6x" (OuterVolumeSpecName: "kube-api-access-22k6x") pod "bb40c293-9f54-44fe-b48c-c58f30f18fdb" (UID: "bb40c293-9f54-44fe-b48c-c58f30f18fdb"). InnerVolumeSpecName "kube-api-access-22k6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.098067 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2f1076b4-75fa-4313-b8f7-7071b8da40d2" (UID: "2f1076b4-75fa-4313-b8f7-7071b8da40d2"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.112749 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-inventory" (OuterVolumeSpecName: "inventory") pod "2f1076b4-75fa-4313-b8f7-7071b8da40d2" (UID: "2f1076b4-75fa-4313-b8f7-7071b8da40d2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.118479 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2f1076b4-75fa-4313-b8f7-7071b8da40d2" (UID: "2f1076b4-75fa-4313-b8f7-7071b8da40d2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.122104 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2f1076b4-75fa-4313-b8f7-7071b8da40d2" (UID: "2f1076b4-75fa-4313-b8f7-7071b8da40d2"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.132073 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2f1076b4-75fa-4313-b8f7-7071b8da40d2" (UID: "2f1076b4-75fa-4313-b8f7-7071b8da40d2"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.132411 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "2f1076b4-75fa-4313-b8f7-7071b8da40d2" (UID: "2f1076b4-75fa-4313-b8f7-7071b8da40d2"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.133706 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2f1076b4-75fa-4313-b8f7-7071b8da40d2" (UID: "2f1076b4-75fa-4313-b8f7-7071b8da40d2"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.134644 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2f1076b4-75fa-4313-b8f7-7071b8da40d2" (UID: "2f1076b4-75fa-4313-b8f7-7071b8da40d2"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.184442 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb40c293-9f54-44fe-b48c-c58f30f18fdb-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.184497 4914 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.184509 4914 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.184519 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.184529 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.184537 4914 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.184546 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22k6x\" (UniqueName: \"kubernetes.io/projected/bb40c293-9f54-44fe-b48c-c58f30f18fdb-kube-api-access-22k6x\") on node \"crc\" DevicePath \"\"" Jan 27 
14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.184573 4914 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.184607 4914 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.184615 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvsfd\" (UniqueName: \"kubernetes.io/projected/2f1076b4-75fa-4313-b8f7-7071b8da40d2-kube-api-access-fvsfd\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.184623 4914 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1076b4-75fa-4313-b8f7-7071b8da40d2-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.238246 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb40c293-9f54-44fe-b48c-c58f30f18fdb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb40c293-9f54-44fe-b48c-c58f30f18fdb" (UID: "bb40c293-9f54-44fe-b48c-c58f30f18fdb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.287125 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb40c293-9f54-44fe-b48c-c58f30f18fdb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.323016 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" event={"ID":"2f1076b4-75fa-4313-b8f7-7071b8da40d2","Type":"ContainerDied","Data":"7539d8aa72645553a0bd214e3c232c124cec444a63ae8d83af5a0af4d684426c"} Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.323037 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qbcgd" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.323058 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7539d8aa72645553a0bd214e3c232c124cec444a63ae8d83af5a0af4d684426c" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.326639 4914 generic.go:334] "Generic (PLEG): container finished" podID="bb40c293-9f54-44fe-b48c-c58f30f18fdb" containerID="137356f4536a43d72444f559c7850827ca6bfaa07f60182c278ee4b0733c843e" exitCode=0 Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.326685 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvdwq" event={"ID":"bb40c293-9f54-44fe-b48c-c58f30f18fdb","Type":"ContainerDied","Data":"137356f4536a43d72444f559c7850827ca6bfaa07f60182c278ee4b0733c843e"} Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.326713 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvdwq" event={"ID":"bb40c293-9f54-44fe-b48c-c58f30f18fdb","Type":"ContainerDied","Data":"c6dbcafc921209d3db2452042fcb08aebc8d3f632896d3d0b40733698dfa9bae"} Jan 27 14:30:31 crc 
kubenswrapper[4914]: I0127 14:30:31.326733 4914 scope.go:117] "RemoveContainer" containerID="137356f4536a43d72444f559c7850827ca6bfaa07f60182c278ee4b0733c843e" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.326792 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvdwq" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.367865 4914 scope.go:117] "RemoveContainer" containerID="bc795313649790335b5ebede5b3bea71fea27eceaf9243517eb2346a9d8df560" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.374818 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvdwq"] Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.382376 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vvdwq"] Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.397157 4914 scope.go:117] "RemoveContainer" containerID="196fdf83c9feba03823bfab2533d3638cfeeb48138f246b400fc38656d6e315e" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.427960 4914 scope.go:117] "RemoveContainer" containerID="137356f4536a43d72444f559c7850827ca6bfaa07f60182c278ee4b0733c843e" Jan 27 14:30:31 crc kubenswrapper[4914]: E0127 14:30:31.431082 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"137356f4536a43d72444f559c7850827ca6bfaa07f60182c278ee4b0733c843e\": container with ID starting with 137356f4536a43d72444f559c7850827ca6bfaa07f60182c278ee4b0733c843e not found: ID does not exist" containerID="137356f4536a43d72444f559c7850827ca6bfaa07f60182c278ee4b0733c843e" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.431145 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137356f4536a43d72444f559c7850827ca6bfaa07f60182c278ee4b0733c843e"} err="failed to get container status 
\"137356f4536a43d72444f559c7850827ca6bfaa07f60182c278ee4b0733c843e\": rpc error: code = NotFound desc = could not find container \"137356f4536a43d72444f559c7850827ca6bfaa07f60182c278ee4b0733c843e\": container with ID starting with 137356f4536a43d72444f559c7850827ca6bfaa07f60182c278ee4b0733c843e not found: ID does not exist" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.431186 4914 scope.go:117] "RemoveContainer" containerID="bc795313649790335b5ebede5b3bea71fea27eceaf9243517eb2346a9d8df560" Jan 27 14:30:31 crc kubenswrapper[4914]: E0127 14:30:31.434162 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc795313649790335b5ebede5b3bea71fea27eceaf9243517eb2346a9d8df560\": container with ID starting with bc795313649790335b5ebede5b3bea71fea27eceaf9243517eb2346a9d8df560 not found: ID does not exist" containerID="bc795313649790335b5ebede5b3bea71fea27eceaf9243517eb2346a9d8df560" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.434215 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc795313649790335b5ebede5b3bea71fea27eceaf9243517eb2346a9d8df560"} err="failed to get container status \"bc795313649790335b5ebede5b3bea71fea27eceaf9243517eb2346a9d8df560\": rpc error: code = NotFound desc = could not find container \"bc795313649790335b5ebede5b3bea71fea27eceaf9243517eb2346a9d8df560\": container with ID starting with bc795313649790335b5ebede5b3bea71fea27eceaf9243517eb2346a9d8df560 not found: ID does not exist" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.434258 4914 scope.go:117] "RemoveContainer" containerID="196fdf83c9feba03823bfab2533d3638cfeeb48138f246b400fc38656d6e315e" Jan 27 14:30:31 crc kubenswrapper[4914]: E0127 14:30:31.434693 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"196fdf83c9feba03823bfab2533d3638cfeeb48138f246b400fc38656d6e315e\": container with ID starting with 196fdf83c9feba03823bfab2533d3638cfeeb48138f246b400fc38656d6e315e not found: ID does not exist" containerID="196fdf83c9feba03823bfab2533d3638cfeeb48138f246b400fc38656d6e315e" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.434754 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196fdf83c9feba03823bfab2533d3638cfeeb48138f246b400fc38656d6e315e"} err="failed to get container status \"196fdf83c9feba03823bfab2533d3638cfeeb48138f246b400fc38656d6e315e\": rpc error: code = NotFound desc = could not find container \"196fdf83c9feba03823bfab2533d3638cfeeb48138f246b400fc38656d6e315e\": container with ID starting with 196fdf83c9feba03823bfab2533d3638cfeeb48138f246b400fc38656d6e315e not found: ID does not exist" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.443042 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj"] Jan 27 14:30:31 crc kubenswrapper[4914]: E0127 14:30:31.443462 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb40c293-9f54-44fe-b48c-c58f30f18fdb" containerName="registry-server" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.443479 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb40c293-9f54-44fe-b48c-c58f30f18fdb" containerName="registry-server" Jan 27 14:30:31 crc kubenswrapper[4914]: E0127 14:30:31.443499 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61775761-ed28-4621-9177-b9e704d46cac" containerName="registry-server" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.443509 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="61775761-ed28-4621-9177-b9e704d46cac" containerName="registry-server" Jan 27 14:30:31 crc kubenswrapper[4914]: E0127 14:30:31.443527 4914 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2f1076b4-75fa-4313-b8f7-7071b8da40d2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.443535 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f1076b4-75fa-4313-b8f7-7071b8da40d2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 14:30:31 crc kubenswrapper[4914]: E0127 14:30:31.443552 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61775761-ed28-4621-9177-b9e704d46cac" containerName="extract-content" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.443558 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="61775761-ed28-4621-9177-b9e704d46cac" containerName="extract-content" Jan 27 14:30:31 crc kubenswrapper[4914]: E0127 14:30:31.443568 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb40c293-9f54-44fe-b48c-c58f30f18fdb" containerName="extract-utilities" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.443578 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb40c293-9f54-44fe-b48c-c58f30f18fdb" containerName="extract-utilities" Jan 27 14:30:31 crc kubenswrapper[4914]: E0127 14:30:31.443600 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61775761-ed28-4621-9177-b9e704d46cac" containerName="extract-utilities" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.443607 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="61775761-ed28-4621-9177-b9e704d46cac" containerName="extract-utilities" Jan 27 14:30:31 crc kubenswrapper[4914]: E0127 14:30:31.443623 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb40c293-9f54-44fe-b48c-c58f30f18fdb" containerName="extract-content" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.443629 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb40c293-9f54-44fe-b48c-c58f30f18fdb" containerName="extract-content" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.443803 4914 
memory_manager.go:354] "RemoveStaleState removing state" podUID="61775761-ed28-4621-9177-b9e704d46cac" containerName="registry-server" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.443819 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb40c293-9f54-44fe-b48c-c58f30f18fdb" containerName="registry-server" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.443907 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f1076b4-75fa-4313-b8f7-7071b8da40d2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.444596 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.446891 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.447088 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.447327 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.447387 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.447524 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m5jxs" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.453821 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj"] Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.593059 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.593112 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.593134 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.593246 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg2vn\" (UniqueName: \"kubernetes.io/projected/82a18d6e-9d92-402d-a885-8176065c4c66-kube-api-access-fg2vn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.593332 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.593360 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.593431 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.695485 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.695560 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.695592 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.695685 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg2vn\" (UniqueName: \"kubernetes.io/projected/82a18d6e-9d92-402d-a885-8176065c4c66-kube-api-access-fg2vn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.695752 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.695785 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc 
kubenswrapper[4914]: I0127 14:30:31.695851 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.700772 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.701265 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.712079 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.712926 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.713680 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.715119 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.718846 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg2vn\" (UniqueName: \"kubernetes.io/projected/82a18d6e-9d92-402d-a885-8176065c4c66-kube-api-access-fg2vn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:31 crc kubenswrapper[4914]: I0127 14:30:31.808322 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:30:32 crc kubenswrapper[4914]: I0127 14:30:32.303970 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb40c293-9f54-44fe-b48c-c58f30f18fdb" path="/var/lib/kubelet/pods/bb40c293-9f54-44fe-b48c-c58f30f18fdb/volumes" Jan 27 14:30:32 crc kubenswrapper[4914]: W0127 14:30:32.387114 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82a18d6e_9d92_402d_a885_8176065c4c66.slice/crio-d0d1ddce6b545f2f972cfeaf6d131d437be13f360cb42b024734700675136ef8 WatchSource:0}: Error finding container d0d1ddce6b545f2f972cfeaf6d131d437be13f360cb42b024734700675136ef8: Status 404 returned error can't find the container with id d0d1ddce6b545f2f972cfeaf6d131d437be13f360cb42b024734700675136ef8 Jan 27 14:30:32 crc kubenswrapper[4914]: I0127 14:30:32.390480 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj"] Jan 27 14:30:33 crc kubenswrapper[4914]: I0127 14:30:33.345675 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" event={"ID":"82a18d6e-9d92-402d-a885-8176065c4c66","Type":"ContainerStarted","Data":"a900d3b9ebc7b11e88b2eff9483711d4c7dbd6aaa922c65c59ffb15b0f5b45db"} Jan 27 14:30:33 crc kubenswrapper[4914]: I0127 14:30:33.346289 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" event={"ID":"82a18d6e-9d92-402d-a885-8176065c4c66","Type":"ContainerStarted","Data":"d0d1ddce6b545f2f972cfeaf6d131d437be13f360cb42b024734700675136ef8"} Jan 27 14:30:33 crc kubenswrapper[4914]: I0127 14:30:33.405352 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" podStartSLOduration=1.786888496 
podStartE2EDuration="2.405335247s" podCreationTimestamp="2026-01-27 14:30:31 +0000 UTC" firstStartedPulling="2026-01-27 14:30:32.390019721 +0000 UTC m=+2790.702369826" lastFinishedPulling="2026-01-27 14:30:33.008466472 +0000 UTC m=+2791.320816577" observedRunningTime="2026-01-27 14:30:33.401603495 +0000 UTC m=+2791.713953590" watchObservedRunningTime="2026-01-27 14:30:33.405335247 +0000 UTC m=+2791.717685332" Jan 27 14:30:37 crc kubenswrapper[4914]: I0127 14:30:37.690515 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:30:37 crc kubenswrapper[4914]: I0127 14:30:37.691086 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:30:37 crc kubenswrapper[4914]: I0127 14:30:37.691138 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 14:30:37 crc kubenswrapper[4914]: I0127 14:30:37.691753 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3642bb8730ee946e0fc1bb6288bad6c73ce709eb88f502af6460b4dc68554a20"} pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:30:37 crc kubenswrapper[4914]: I0127 14:30:37.691804 4914 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" containerID="cri-o://3642bb8730ee946e0fc1bb6288bad6c73ce709eb88f502af6460b4dc68554a20" gracePeriod=600 Jan 27 14:30:38 crc kubenswrapper[4914]: I0127 14:30:38.401425 4914 generic.go:334] "Generic (PLEG): container finished" podID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerID="3642bb8730ee946e0fc1bb6288bad6c73ce709eb88f502af6460b4dc68554a20" exitCode=0 Jan 27 14:30:38 crc kubenswrapper[4914]: I0127 14:30:38.401497 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerDied","Data":"3642bb8730ee946e0fc1bb6288bad6c73ce709eb88f502af6460b4dc68554a20"} Jan 27 14:30:38 crc kubenswrapper[4914]: I0127 14:30:38.402096 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12"} Jan 27 14:30:38 crc kubenswrapper[4914]: I0127 14:30:38.402116 4914 scope.go:117] "RemoveContainer" containerID="bd7cac352691bd65d38724342f77a2456fa5ddc348e9249545dd6490f8a773dc" Jan 27 14:33:07 crc kubenswrapper[4914]: I0127 14:33:07.690978 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:33:07 crc kubenswrapper[4914]: I0127 14:33:07.691583 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:33:10 crc kubenswrapper[4914]: I0127 14:33:10.812042 4914 generic.go:334] "Generic (PLEG): container finished" podID="82a18d6e-9d92-402d-a885-8176065c4c66" containerID="a900d3b9ebc7b11e88b2eff9483711d4c7dbd6aaa922c65c59ffb15b0f5b45db" exitCode=0 Jan 27 14:33:10 crc kubenswrapper[4914]: I0127 14:33:10.812155 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" event={"ID":"82a18d6e-9d92-402d-a885-8176065c4c66","Type":"ContainerDied","Data":"a900d3b9ebc7b11e88b2eff9483711d4c7dbd6aaa922c65c59ffb15b0f5b45db"} Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.188325 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.281004 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-inventory\") pod \"82a18d6e-9d92-402d-a885-8176065c4c66\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.281106 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg2vn\" (UniqueName: \"kubernetes.io/projected/82a18d6e-9d92-402d-a885-8176065c4c66-kube-api-access-fg2vn\") pod \"82a18d6e-9d92-402d-a885-8176065c4c66\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.281197 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-2\") pod \"82a18d6e-9d92-402d-a885-8176065c4c66\" (UID: 
\"82a18d6e-9d92-402d-a885-8176065c4c66\") " Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.281298 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-telemetry-combined-ca-bundle\") pod \"82a18d6e-9d92-402d-a885-8176065c4c66\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.281351 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ssh-key-openstack-edpm-ipam\") pod \"82a18d6e-9d92-402d-a885-8176065c4c66\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.281401 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-1\") pod \"82a18d6e-9d92-402d-a885-8176065c4c66\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.281520 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-0\") pod \"82a18d6e-9d92-402d-a885-8176065c4c66\" (UID: \"82a18d6e-9d92-402d-a885-8176065c4c66\") " Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.289286 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "82a18d6e-9d92-402d-a885-8176065c4c66" (UID: "82a18d6e-9d92-402d-a885-8176065c4c66"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.290229 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a18d6e-9d92-402d-a885-8176065c4c66-kube-api-access-fg2vn" (OuterVolumeSpecName: "kube-api-access-fg2vn") pod "82a18d6e-9d92-402d-a885-8176065c4c66" (UID: "82a18d6e-9d92-402d-a885-8176065c4c66"). InnerVolumeSpecName "kube-api-access-fg2vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.313054 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "82a18d6e-9d92-402d-a885-8176065c4c66" (UID: "82a18d6e-9d92-402d-a885-8176065c4c66"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.313526 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "82a18d6e-9d92-402d-a885-8176065c4c66" (UID: "82a18d6e-9d92-402d-a885-8176065c4c66"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.315394 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "82a18d6e-9d92-402d-a885-8176065c4c66" (UID: "82a18d6e-9d92-402d-a885-8176065c4c66"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.327444 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-inventory" (OuterVolumeSpecName: "inventory") pod "82a18d6e-9d92-402d-a885-8176065c4c66" (UID: "82a18d6e-9d92-402d-a885-8176065c4c66"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.329398 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "82a18d6e-9d92-402d-a885-8176065c4c66" (UID: "82a18d6e-9d92-402d-a885-8176065c4c66"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.384108 4914 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.384152 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg2vn\" (UniqueName: \"kubernetes.io/projected/82a18d6e-9d92-402d-a885-8176065c4c66-kube-api-access-fg2vn\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.384168 4914 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.384179 4914 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.384189 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.384197 4914 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.384205 4914 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/82a18d6e-9d92-402d-a885-8176065c4c66-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.832628 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" event={"ID":"82a18d6e-9d92-402d-a885-8176065c4c66","Type":"ContainerDied","Data":"d0d1ddce6b545f2f972cfeaf6d131d437be13f360cb42b024734700675136ef8"} Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.832690 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0d1ddce6b545f2f972cfeaf6d131d437be13f360cb42b024734700675136ef8" Jan 27 14:33:12 crc kubenswrapper[4914]: I0127 14:33:12.832701 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj" Jan 27 14:33:37 crc kubenswrapper[4914]: I0127 14:33:37.690748 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:33:37 crc kubenswrapper[4914]: I0127 14:33:37.691333 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:34:07 crc kubenswrapper[4914]: I0127 14:34:07.691351 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:34:07 crc kubenswrapper[4914]: I0127 14:34:07.691765 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:34:07 crc kubenswrapper[4914]: I0127 14:34:07.691815 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 14:34:07 crc kubenswrapper[4914]: I0127 14:34:07.692457 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12"} pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:34:07 crc kubenswrapper[4914]: I0127 14:34:07.692509 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" containerID="cri-o://db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" gracePeriod=600 Jan 27 14:34:07 crc kubenswrapper[4914]: E0127 14:34:07.810676 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:34:08 crc kubenswrapper[4914]: I0127 14:34:08.330345 4914 generic.go:334] "Generic (PLEG): container finished" podID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" exitCode=0 Jan 27 14:34:08 crc kubenswrapper[4914]: I0127 14:34:08.330380 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerDied","Data":"db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12"} Jan 27 14:34:08 crc kubenswrapper[4914]: I0127 14:34:08.330410 4914 scope.go:117] "RemoveContainer" containerID="3642bb8730ee946e0fc1bb6288bad6c73ce709eb88f502af6460b4dc68554a20" Jan 27 14:34:08 crc kubenswrapper[4914]: I0127 14:34:08.331424 4914 
scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:34:08 crc kubenswrapper[4914]: E0127 14:34:08.331731 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:34:09 crc kubenswrapper[4914]: I0127 14:34:09.903129 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 14:34:09 crc kubenswrapper[4914]: E0127 14:34:09.904775 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a18d6e-9d92-402d-a885-8176065c4c66" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 14:34:09 crc kubenswrapper[4914]: I0127 14:34:09.904800 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a18d6e-9d92-402d-a885-8176065c4c66" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 14:34:09 crc kubenswrapper[4914]: I0127 14:34:09.905087 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="82a18d6e-9d92-402d-a885-8176065c4c66" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 14:34:09 crc kubenswrapper[4914]: I0127 14:34:09.905939 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 14:34:09 crc kubenswrapper[4914]: I0127 14:34:09.909539 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 27 14:34:09 crc kubenswrapper[4914]: I0127 14:34:09.909732 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 27 14:34:09 crc kubenswrapper[4914]: I0127 14:34:09.909922 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 14:34:09 crc kubenswrapper[4914]: I0127 14:34:09.910042 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-scpv4" Jan 27 14:34:09 crc kubenswrapper[4914]: I0127 14:34:09.934651 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.005494 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b94746b2-3335-4991-952f-fe8ec53a24b8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.005956 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b94746b2-3335-4991-952f-fe8ec53a24b8-config-data\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.005996 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" 
(UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.006017 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.006052 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.006282 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkt76\" (UniqueName: \"kubernetes.io/projected/b94746b2-3335-4991-952f-fe8ec53a24b8-kube-api-access-qkt76\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.006480 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b94746b2-3335-4991-952f-fe8ec53a24b8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.006547 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/b94746b2-3335-4991-952f-fe8ec53a24b8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.006600 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.108586 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b94746b2-3335-4991-952f-fe8ec53a24b8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.108699 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b94746b2-3335-4991-952f-fe8ec53a24b8-config-data\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.108722 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.108737 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-ca-certs\") pod \"tempest-tests-tempest\" (UID: 
\"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.108757 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.108791 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkt76\" (UniqueName: \"kubernetes.io/projected/b94746b2-3335-4991-952f-fe8ec53a24b8-kube-api-access-qkt76\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.108846 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b94746b2-3335-4991-952f-fe8ec53a24b8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.108872 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b94746b2-3335-4991-952f-fe8ec53a24b8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.108891 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.110264 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.110991 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b94746b2-3335-4991-952f-fe8ec53a24b8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.111191 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b94746b2-3335-4991-952f-fe8ec53a24b8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.112170 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b94746b2-3335-4991-952f-fe8ec53a24b8-config-data\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.113234 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b94746b2-3335-4991-952f-fe8ec53a24b8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " 
pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.115369 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.116566 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.125757 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.133566 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkt76\" (UniqueName: \"kubernetes.io/projected/b94746b2-3335-4991-952f-fe8ec53a24b8-kube-api-access-qkt76\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.141473 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.233888 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.827302 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 14:34:10 crc kubenswrapper[4914]: I0127 14:34:10.841309 4914 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:34:11 crc kubenswrapper[4914]: I0127 14:34:11.359788 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b94746b2-3335-4991-952f-fe8ec53a24b8","Type":"ContainerStarted","Data":"af00179c30a5ef342f3270e01deb384d793c9f81b0c582318a430ca2fae7c575"} Jan 27 14:34:21 crc kubenswrapper[4914]: I0127 14:34:21.294586 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:34:21 crc kubenswrapper[4914]: E0127 14:34:21.295314 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:34:35 crc kubenswrapper[4914]: I0127 14:34:35.294795 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:34:35 crc kubenswrapper[4914]: E0127 14:34:35.295681 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" 
podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:34:44 crc kubenswrapper[4914]: E0127 14:34:44.090695 4914 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 27 14:34:44 crc kubenswrapper[4914]: E0127 14:34:44.091464 4914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkt76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(b94746b2-3335-4991-952f-fe8ec53a24b8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:34:44 crc kubenswrapper[4914]: E0127 14:34:44.092633 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="b94746b2-3335-4991-952f-fe8ec53a24b8" Jan 27 14:34:44 crc kubenswrapper[4914]: E0127 14:34:44.680169 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="b94746b2-3335-4991-952f-fe8ec53a24b8" Jan 27 14:34:49 crc kubenswrapper[4914]: I0127 14:34:49.294676 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:34:49 crc kubenswrapper[4914]: E0127 14:34:49.295488 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:35:00 crc kubenswrapper[4914]: I0127 14:35:00.808574 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b94746b2-3335-4991-952f-fe8ec53a24b8","Type":"ContainerStarted","Data":"02def17c938b477b615f3b6d3716d0067c201489aa19603f5aafeafe1611cf78"} Jan 27 14:35:00 crc kubenswrapper[4914]: I0127 14:35:00.830241 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.794073856 podStartE2EDuration="52.830218978s" podCreationTimestamp="2026-01-27 14:34:08 +0000 UTC" firstStartedPulling="2026-01-27 14:34:10.841061313 +0000 UTC m=+3009.153411398" lastFinishedPulling="2026-01-27 14:34:58.877206415 +0000 UTC m=+3057.189556520" observedRunningTime="2026-01-27 14:35:00.825034317 +0000 UTC m=+3059.137384412" 
watchObservedRunningTime="2026-01-27 14:35:00.830218978 +0000 UTC m=+3059.142569083" Jan 27 14:35:04 crc kubenswrapper[4914]: I0127 14:35:04.294575 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:35:04 crc kubenswrapper[4914]: E0127 14:35:04.295503 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:35:16 crc kubenswrapper[4914]: I0127 14:35:16.321753 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:35:16 crc kubenswrapper[4914]: E0127 14:35:16.322602 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:35:18 crc kubenswrapper[4914]: I0127 14:35:18.978912 4914 generic.go:334] "Generic (PLEG): container finished" podID="b94746b2-3335-4991-952f-fe8ec53a24b8" containerID="02def17c938b477b615f3b6d3716d0067c201489aa19603f5aafeafe1611cf78" exitCode=100 Jan 27 14:35:18 crc kubenswrapper[4914]: I0127 14:35:18.978981 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b94746b2-3335-4991-952f-fe8ec53a24b8","Type":"ContainerDied","Data":"02def17c938b477b615f3b6d3716d0067c201489aa19603f5aafeafe1611cf78"} Jan 27 
14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.482106 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.562902 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b94746b2-3335-4991-952f-fe8ec53a24b8-test-operator-ephemeral-workdir\") pod \"b94746b2-3335-4991-952f-fe8ec53a24b8\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.562946 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-ca-certs\") pod \"b94746b2-3335-4991-952f-fe8ec53a24b8\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.562976 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-openstack-config-secret\") pod \"b94746b2-3335-4991-952f-fe8ec53a24b8\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.563000 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b94746b2-3335-4991-952f-fe8ec53a24b8-openstack-config\") pod \"b94746b2-3335-4991-952f-fe8ec53a24b8\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.563041 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b94746b2-3335-4991-952f-fe8ec53a24b8-config-data\") pod \"b94746b2-3335-4991-952f-fe8ec53a24b8\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " Jan 27 14:35:20 crc 
kubenswrapper[4914]: I0127 14:35:20.563077 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b94746b2-3335-4991-952f-fe8ec53a24b8-test-operator-ephemeral-temporary\") pod \"b94746b2-3335-4991-952f-fe8ec53a24b8\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.563129 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"b94746b2-3335-4991-952f-fe8ec53a24b8\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.563739 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b94746b2-3335-4991-952f-fe8ec53a24b8-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "b94746b2-3335-4991-952f-fe8ec53a24b8" (UID: "b94746b2-3335-4991-952f-fe8ec53a24b8"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.564110 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94746b2-3335-4991-952f-fe8ec53a24b8-config-data" (OuterVolumeSpecName: "config-data") pod "b94746b2-3335-4991-952f-fe8ec53a24b8" (UID: "b94746b2-3335-4991-952f-fe8ec53a24b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.572015 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "b94746b2-3335-4991-952f-fe8ec53a24b8" (UID: "b94746b2-3335-4991-952f-fe8ec53a24b8"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.575137 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b94746b2-3335-4991-952f-fe8ec53a24b8-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "b94746b2-3335-4991-952f-fe8ec53a24b8" (UID: "b94746b2-3335-4991-952f-fe8ec53a24b8"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.591502 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "b94746b2-3335-4991-952f-fe8ec53a24b8" (UID: "b94746b2-3335-4991-952f-fe8ec53a24b8"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.600311 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b94746b2-3335-4991-952f-fe8ec53a24b8" (UID: "b94746b2-3335-4991-952f-fe8ec53a24b8"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.616561 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94746b2-3335-4991-952f-fe8ec53a24b8-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b94746b2-3335-4991-952f-fe8ec53a24b8" (UID: "b94746b2-3335-4991-952f-fe8ec53a24b8"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.665183 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkt76\" (UniqueName: \"kubernetes.io/projected/b94746b2-3335-4991-952f-fe8ec53a24b8-kube-api-access-qkt76\") pod \"b94746b2-3335-4991-952f-fe8ec53a24b8\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.665253 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-ssh-key\") pod \"b94746b2-3335-4991-952f-fe8ec53a24b8\" (UID: \"b94746b2-3335-4991-952f-fe8ec53a24b8\") " Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.665890 4914 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b94746b2-3335-4991-952f-fe8ec53a24b8-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.665914 4914 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.665927 4914 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.665943 4914 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b94746b2-3335-4991-952f-fe8ec53a24b8-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.665957 4914 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/b94746b2-3335-4991-952f-fe8ec53a24b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.665968 4914 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b94746b2-3335-4991-952f-fe8ec53a24b8-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.666003 4914 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.669959 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b94746b2-3335-4991-952f-fe8ec53a24b8-kube-api-access-qkt76" (OuterVolumeSpecName: "kube-api-access-qkt76") pod "b94746b2-3335-4991-952f-fe8ec53a24b8" (UID: "b94746b2-3335-4991-952f-fe8ec53a24b8"). InnerVolumeSpecName "kube-api-access-qkt76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.691391 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b94746b2-3335-4991-952f-fe8ec53a24b8" (UID: "b94746b2-3335-4991-952f-fe8ec53a24b8"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.694206 4914 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.768243 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkt76\" (UniqueName: \"kubernetes.io/projected/b94746b2-3335-4991-952f-fe8ec53a24b8-kube-api-access-qkt76\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.768274 4914 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b94746b2-3335-4991-952f-fe8ec53a24b8-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:20 crc kubenswrapper[4914]: I0127 14:35:20.768285 4914 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:21 crc kubenswrapper[4914]: I0127 14:35:21.005030 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b94746b2-3335-4991-952f-fe8ec53a24b8","Type":"ContainerDied","Data":"af00179c30a5ef342f3270e01deb384d793c9f81b0c582318a430ca2fae7c575"} Jan 27 14:35:21 crc kubenswrapper[4914]: I0127 14:35:21.005069 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af00179c30a5ef342f3270e01deb384d793c9f81b0c582318a430ca2fae7c575" Jan 27 14:35:21 crc kubenswrapper[4914]: I0127 14:35:21.005122 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.470815 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 14:35:27 crc kubenswrapper[4914]: E0127 14:35:27.473313 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b94746b2-3335-4991-952f-fe8ec53a24b8" containerName="tempest-tests-tempest-tests-runner" Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.473451 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b94746b2-3335-4991-952f-fe8ec53a24b8" containerName="tempest-tests-tempest-tests-runner" Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.474069 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="b94746b2-3335-4991-952f-fe8ec53a24b8" containerName="tempest-tests-tempest-tests-runner" Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.475395 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.477765 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-scpv4" Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.491476 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.603037 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99hbw\" (UniqueName: \"kubernetes.io/projected/ba57797f-afb4-4ef6-9f35-3ca8e7b6bb3d-kube-api-access-99hbw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba57797f-afb4-4ef6-9f35-3ca8e7b6bb3d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.603083 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba57797f-afb4-4ef6-9f35-3ca8e7b6bb3d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.705169 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99hbw\" (UniqueName: \"kubernetes.io/projected/ba57797f-afb4-4ef6-9f35-3ca8e7b6bb3d-kube-api-access-99hbw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba57797f-afb4-4ef6-9f35-3ca8e7b6bb3d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.705226 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba57797f-afb4-4ef6-9f35-3ca8e7b6bb3d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.705594 4914 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba57797f-afb4-4ef6-9f35-3ca8e7b6bb3d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.731626 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99hbw\" (UniqueName: \"kubernetes.io/projected/ba57797f-afb4-4ef6-9f35-3ca8e7b6bb3d-kube-api-access-99hbw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba57797f-afb4-4ef6-9f35-3ca8e7b6bb3d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.733434 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba57797f-afb4-4ef6-9f35-3ca8e7b6bb3d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 14:35:27 crc kubenswrapper[4914]: I0127 14:35:27.802206 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 14:35:28 crc kubenswrapper[4914]: I0127 14:35:28.241756 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 14:35:29 crc kubenswrapper[4914]: I0127 14:35:29.105391 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ba57797f-afb4-4ef6-9f35-3ca8e7b6bb3d","Type":"ContainerStarted","Data":"259fb2fa39e6a54ab1c873aed42a35c49ba3b93415990c39b7dc15309f53ad81"} Jan 27 14:35:30 crc kubenswrapper[4914]: I0127 14:35:30.115612 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ba57797f-afb4-4ef6-9f35-3ca8e7b6bb3d","Type":"ContainerStarted","Data":"bbd4a37dd6ef987ec263ce7171c86b2fd24bac5289c29540f9d3ca3db2c0ab5e"} Jan 27 14:35:30 crc kubenswrapper[4914]: I0127 14:35:30.137192 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.100311469 podStartE2EDuration="3.137174369s" podCreationTimestamp="2026-01-27 14:35:27 +0000 UTC" firstStartedPulling="2026-01-27 14:35:28.243854636 +0000 UTC m=+3086.556204741" lastFinishedPulling="2026-01-27 14:35:29.280717536 +0000 UTC m=+3087.593067641" observedRunningTime="2026-01-27 14:35:30.129548211 +0000 UTC m=+3088.441898296" watchObservedRunningTime="2026-01-27 14:35:30.137174369 +0000 UTC m=+3088.449524454" Jan 27 14:35:30 crc kubenswrapper[4914]: I0127 14:35:30.295242 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:35:30 crc kubenswrapper[4914]: E0127 14:35:30.295792 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:35:44 crc kubenswrapper[4914]: I0127 14:35:44.295620 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:35:44 crc kubenswrapper[4914]: E0127 14:35:44.296501 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:35:56 crc kubenswrapper[4914]: I0127 14:35:56.294897 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:35:56 crc kubenswrapper[4914]: E0127 14:35:56.295739 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:35:59 crc kubenswrapper[4914]: I0127 14:35:59.773627 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p5z5z/must-gather-mqs9g"] Jan 27 14:35:59 crc kubenswrapper[4914]: I0127 14:35:59.776253 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p5z5z/must-gather-mqs9g" Jan 27 14:35:59 crc kubenswrapper[4914]: I0127 14:35:59.795299 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p5z5z"/"openshift-service-ca.crt" Jan 27 14:35:59 crc kubenswrapper[4914]: I0127 14:35:59.795379 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p5z5z"/"kube-root-ca.crt" Jan 27 14:35:59 crc kubenswrapper[4914]: I0127 14:35:59.795756 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-p5z5z"/"default-dockercfg-rpfkj" Jan 27 14:35:59 crc kubenswrapper[4914]: I0127 14:35:59.808270 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p5z5z/must-gather-mqs9g"] Jan 27 14:35:59 crc kubenswrapper[4914]: I0127 14:35:59.943953 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl2pc\" (UniqueName: \"kubernetes.io/projected/d5cc861b-5221-436b-8fb2-82b729fd4334-kube-api-access-tl2pc\") pod \"must-gather-mqs9g\" (UID: \"d5cc861b-5221-436b-8fb2-82b729fd4334\") " pod="openshift-must-gather-p5z5z/must-gather-mqs9g" Jan 27 14:35:59 crc kubenswrapper[4914]: I0127 14:35:59.943999 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5cc861b-5221-436b-8fb2-82b729fd4334-must-gather-output\") pod \"must-gather-mqs9g\" (UID: \"d5cc861b-5221-436b-8fb2-82b729fd4334\") " pod="openshift-must-gather-p5z5z/must-gather-mqs9g" Jan 27 14:36:00 crc kubenswrapper[4914]: I0127 14:36:00.046692 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl2pc\" (UniqueName: \"kubernetes.io/projected/d5cc861b-5221-436b-8fb2-82b729fd4334-kube-api-access-tl2pc\") pod \"must-gather-mqs9g\" (UID: \"d5cc861b-5221-436b-8fb2-82b729fd4334\") " 
pod="openshift-must-gather-p5z5z/must-gather-mqs9g" Jan 27 14:36:00 crc kubenswrapper[4914]: I0127 14:36:00.046780 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5cc861b-5221-436b-8fb2-82b729fd4334-must-gather-output\") pod \"must-gather-mqs9g\" (UID: \"d5cc861b-5221-436b-8fb2-82b729fd4334\") " pod="openshift-must-gather-p5z5z/must-gather-mqs9g" Jan 27 14:36:00 crc kubenswrapper[4914]: I0127 14:36:00.047446 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5cc861b-5221-436b-8fb2-82b729fd4334-must-gather-output\") pod \"must-gather-mqs9g\" (UID: \"d5cc861b-5221-436b-8fb2-82b729fd4334\") " pod="openshift-must-gather-p5z5z/must-gather-mqs9g" Jan 27 14:36:00 crc kubenswrapper[4914]: I0127 14:36:00.068959 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl2pc\" (UniqueName: \"kubernetes.io/projected/d5cc861b-5221-436b-8fb2-82b729fd4334-kube-api-access-tl2pc\") pod \"must-gather-mqs9g\" (UID: \"d5cc861b-5221-436b-8fb2-82b729fd4334\") " pod="openshift-must-gather-p5z5z/must-gather-mqs9g" Jan 27 14:36:00 crc kubenswrapper[4914]: I0127 14:36:00.112579 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p5z5z/must-gather-mqs9g" Jan 27 14:36:00 crc kubenswrapper[4914]: I0127 14:36:00.599336 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p5z5z/must-gather-mqs9g"] Jan 27 14:36:01 crc kubenswrapper[4914]: I0127 14:36:01.403142 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p5z5z/must-gather-mqs9g" event={"ID":"d5cc861b-5221-436b-8fb2-82b729fd4334","Type":"ContainerStarted","Data":"b75a0c826a8b664b68bf3fcdc735682dc75395d88de7bec3cf6af871b8b78a73"} Jan 27 14:36:07 crc kubenswrapper[4914]: I0127 14:36:07.294978 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:36:07 crc kubenswrapper[4914]: E0127 14:36:07.296224 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:36:08 crc kubenswrapper[4914]: I0127 14:36:08.484457 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p5z5z/must-gather-mqs9g" event={"ID":"d5cc861b-5221-436b-8fb2-82b729fd4334","Type":"ContainerStarted","Data":"a67d191197532e1612b9dc961b64c8029760ac385ae61f3b522cd230b585648b"} Jan 27 14:36:08 crc kubenswrapper[4914]: I0127 14:36:08.484811 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p5z5z/must-gather-mqs9g" event={"ID":"d5cc861b-5221-436b-8fb2-82b729fd4334","Type":"ContainerStarted","Data":"b391631083b93c23aa391c5697a41b0d5ceabf4d65a48510346e65490203239a"} Jan 27 14:36:08 crc kubenswrapper[4914]: I0127 14:36:08.503679 4914 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-p5z5z/must-gather-mqs9g" podStartSLOduration=2.102904829 podStartE2EDuration="9.50366463s" podCreationTimestamp="2026-01-27 14:35:59 +0000 UTC" firstStartedPulling="2026-01-27 14:36:00.610102174 +0000 UTC m=+3118.922452259" lastFinishedPulling="2026-01-27 14:36:08.010861965 +0000 UTC m=+3126.323212060" observedRunningTime="2026-01-27 14:36:08.500290248 +0000 UTC m=+3126.812640333" watchObservedRunningTime="2026-01-27 14:36:08.50366463 +0000 UTC m=+3126.816014715" Jan 27 14:36:11 crc kubenswrapper[4914]: I0127 14:36:11.875696 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p5z5z/crc-debug-z96rv"] Jan 27 14:36:11 crc kubenswrapper[4914]: I0127 14:36:11.877452 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p5z5z/crc-debug-z96rv" Jan 27 14:36:12 crc kubenswrapper[4914]: I0127 14:36:12.016405 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3737551-7641-4955-995e-4f493677dc8f-host\") pod \"crc-debug-z96rv\" (UID: \"b3737551-7641-4955-995e-4f493677dc8f\") " pod="openshift-must-gather-p5z5z/crc-debug-z96rv" Jan 27 14:36:12 crc kubenswrapper[4914]: I0127 14:36:12.016700 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnfrj\" (UniqueName: \"kubernetes.io/projected/b3737551-7641-4955-995e-4f493677dc8f-kube-api-access-hnfrj\") pod \"crc-debug-z96rv\" (UID: \"b3737551-7641-4955-995e-4f493677dc8f\") " pod="openshift-must-gather-p5z5z/crc-debug-z96rv" Jan 27 14:36:12 crc kubenswrapper[4914]: I0127 14:36:12.119015 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3737551-7641-4955-995e-4f493677dc8f-host\") pod \"crc-debug-z96rv\" (UID: \"b3737551-7641-4955-995e-4f493677dc8f\") " 
pod="openshift-must-gather-p5z5z/crc-debug-z96rv" Jan 27 14:36:12 crc kubenswrapper[4914]: I0127 14:36:12.119124 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnfrj\" (UniqueName: \"kubernetes.io/projected/b3737551-7641-4955-995e-4f493677dc8f-kube-api-access-hnfrj\") pod \"crc-debug-z96rv\" (UID: \"b3737551-7641-4955-995e-4f493677dc8f\") " pod="openshift-must-gather-p5z5z/crc-debug-z96rv" Jan 27 14:36:12 crc kubenswrapper[4914]: I0127 14:36:12.119541 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3737551-7641-4955-995e-4f493677dc8f-host\") pod \"crc-debug-z96rv\" (UID: \"b3737551-7641-4955-995e-4f493677dc8f\") " pod="openshift-must-gather-p5z5z/crc-debug-z96rv" Jan 27 14:36:12 crc kubenswrapper[4914]: I0127 14:36:12.139078 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnfrj\" (UniqueName: \"kubernetes.io/projected/b3737551-7641-4955-995e-4f493677dc8f-kube-api-access-hnfrj\") pod \"crc-debug-z96rv\" (UID: \"b3737551-7641-4955-995e-4f493677dc8f\") " pod="openshift-must-gather-p5z5z/crc-debug-z96rv" Jan 27 14:36:12 crc kubenswrapper[4914]: I0127 14:36:12.196640 4914 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p5z5z/crc-debug-z96rv" Jan 27 14:36:12 crc kubenswrapper[4914]: I0127 14:36:12.521048 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p5z5z/crc-debug-z96rv" event={"ID":"b3737551-7641-4955-995e-4f493677dc8f","Type":"ContainerStarted","Data":"c64b0907e9384f878e7494a57dccf052baefacb684fc3a9ab8c4a80a784117a1"} Jan 27 14:36:19 crc kubenswrapper[4914]: I0127 14:36:19.307794 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:36:19 crc kubenswrapper[4914]: E0127 14:36:19.308576 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:36:23 crc kubenswrapper[4914]: I0127 14:36:23.622299 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p5z5z/crc-debug-z96rv" event={"ID":"b3737551-7641-4955-995e-4f493677dc8f","Type":"ContainerStarted","Data":"d3ef800887fd60cb8c2bfe5eeb5f7669a4380ead15bc62bd7b89a682269267fb"} Jan 27 14:36:32 crc kubenswrapper[4914]: I0127 14:36:32.302259 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:36:32 crc kubenswrapper[4914]: E0127 14:36:32.303015 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:36:45 crc kubenswrapper[4914]: I0127 14:36:45.805042 4914 generic.go:334] "Generic (PLEG): container finished" podID="b3737551-7641-4955-995e-4f493677dc8f" containerID="d3ef800887fd60cb8c2bfe5eeb5f7669a4380ead15bc62bd7b89a682269267fb" exitCode=0 Jan 27 14:36:45 crc kubenswrapper[4914]: I0127 14:36:45.805138 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p5z5z/crc-debug-z96rv" event={"ID":"b3737551-7641-4955-995e-4f493677dc8f","Type":"ContainerDied","Data":"d3ef800887fd60cb8c2bfe5eeb5f7669a4380ead15bc62bd7b89a682269267fb"} Jan 27 14:36:46 crc kubenswrapper[4914]: I0127 14:36:46.295563 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:36:46 crc kubenswrapper[4914]: E0127 14:36:46.295796 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:36:46 crc kubenswrapper[4914]: I0127 14:36:46.925526 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p5z5z/crc-debug-z96rv" Jan 27 14:36:46 crc kubenswrapper[4914]: I0127 14:36:46.960094 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p5z5z/crc-debug-z96rv"] Jan 27 14:36:46 crc kubenswrapper[4914]: I0127 14:36:46.970662 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p5z5z/crc-debug-z96rv"] Jan 27 14:36:47 crc kubenswrapper[4914]: I0127 14:36:47.087972 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3737551-7641-4955-995e-4f493677dc8f-host\") pod \"b3737551-7641-4955-995e-4f493677dc8f\" (UID: \"b3737551-7641-4955-995e-4f493677dc8f\") " Jan 27 14:36:47 crc kubenswrapper[4914]: I0127 14:36:47.088032 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnfrj\" (UniqueName: \"kubernetes.io/projected/b3737551-7641-4955-995e-4f493677dc8f-kube-api-access-hnfrj\") pod \"b3737551-7641-4955-995e-4f493677dc8f\" (UID: \"b3737551-7641-4955-995e-4f493677dc8f\") " Jan 27 14:36:47 crc kubenswrapper[4914]: I0127 14:36:47.089137 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3737551-7641-4955-995e-4f493677dc8f-host" (OuterVolumeSpecName: "host") pod "b3737551-7641-4955-995e-4f493677dc8f" (UID: "b3737551-7641-4955-995e-4f493677dc8f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:36:47 crc kubenswrapper[4914]: I0127 14:36:47.107085 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3737551-7641-4955-995e-4f493677dc8f-kube-api-access-hnfrj" (OuterVolumeSpecName: "kube-api-access-hnfrj") pod "b3737551-7641-4955-995e-4f493677dc8f" (UID: "b3737551-7641-4955-995e-4f493677dc8f"). InnerVolumeSpecName "kube-api-access-hnfrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:47 crc kubenswrapper[4914]: I0127 14:36:47.191223 4914 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3737551-7641-4955-995e-4f493677dc8f-host\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:47 crc kubenswrapper[4914]: I0127 14:36:47.191271 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnfrj\" (UniqueName: \"kubernetes.io/projected/b3737551-7641-4955-995e-4f493677dc8f-kube-api-access-hnfrj\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:47 crc kubenswrapper[4914]: I0127 14:36:47.820953 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c64b0907e9384f878e7494a57dccf052baefacb684fc3a9ab8c4a80a784117a1" Jan 27 14:36:47 crc kubenswrapper[4914]: I0127 14:36:47.821026 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p5z5z/crc-debug-z96rv" Jan 27 14:36:48 crc kubenswrapper[4914]: I0127 14:36:48.305761 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3737551-7641-4955-995e-4f493677dc8f" path="/var/lib/kubelet/pods/b3737551-7641-4955-995e-4f493677dc8f/volumes" Jan 27 14:36:48 crc kubenswrapper[4914]: I0127 14:36:48.350781 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p5z5z/crc-debug-99rq4"] Jan 27 14:36:48 crc kubenswrapper[4914]: E0127 14:36:48.351327 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3737551-7641-4955-995e-4f493677dc8f" containerName="container-00" Jan 27 14:36:48 crc kubenswrapper[4914]: I0127 14:36:48.351355 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3737551-7641-4955-995e-4f493677dc8f" containerName="container-00" Jan 27 14:36:48 crc kubenswrapper[4914]: I0127 14:36:48.351609 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3737551-7641-4955-995e-4f493677dc8f" 
containerName="container-00" Jan 27 14:36:48 crc kubenswrapper[4914]: I0127 14:36:48.352344 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p5z5z/crc-debug-99rq4" Jan 27 14:36:48 crc kubenswrapper[4914]: I0127 14:36:48.514576 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wf7p\" (UniqueName: \"kubernetes.io/projected/409ce055-91d8-49e9-aa80-63931fd1df77-kube-api-access-7wf7p\") pod \"crc-debug-99rq4\" (UID: \"409ce055-91d8-49e9-aa80-63931fd1df77\") " pod="openshift-must-gather-p5z5z/crc-debug-99rq4" Jan 27 14:36:48 crc kubenswrapper[4914]: I0127 14:36:48.515516 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/409ce055-91d8-49e9-aa80-63931fd1df77-host\") pod \"crc-debug-99rq4\" (UID: \"409ce055-91d8-49e9-aa80-63931fd1df77\") " pod="openshift-must-gather-p5z5z/crc-debug-99rq4" Jan 27 14:36:48 crc kubenswrapper[4914]: I0127 14:36:48.616617 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/409ce055-91d8-49e9-aa80-63931fd1df77-host\") pod \"crc-debug-99rq4\" (UID: \"409ce055-91d8-49e9-aa80-63931fd1df77\") " pod="openshift-must-gather-p5z5z/crc-debug-99rq4" Jan 27 14:36:48 crc kubenswrapper[4914]: I0127 14:36:48.616706 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wf7p\" (UniqueName: \"kubernetes.io/projected/409ce055-91d8-49e9-aa80-63931fd1df77-kube-api-access-7wf7p\") pod \"crc-debug-99rq4\" (UID: \"409ce055-91d8-49e9-aa80-63931fd1df77\") " pod="openshift-must-gather-p5z5z/crc-debug-99rq4" Jan 27 14:36:48 crc kubenswrapper[4914]: I0127 14:36:48.616750 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/409ce055-91d8-49e9-aa80-63931fd1df77-host\") 
pod \"crc-debug-99rq4\" (UID: \"409ce055-91d8-49e9-aa80-63931fd1df77\") " pod="openshift-must-gather-p5z5z/crc-debug-99rq4" Jan 27 14:36:48 crc kubenswrapper[4914]: I0127 14:36:48.641678 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wf7p\" (UniqueName: \"kubernetes.io/projected/409ce055-91d8-49e9-aa80-63931fd1df77-kube-api-access-7wf7p\") pod \"crc-debug-99rq4\" (UID: \"409ce055-91d8-49e9-aa80-63931fd1df77\") " pod="openshift-must-gather-p5z5z/crc-debug-99rq4" Jan 27 14:36:48 crc kubenswrapper[4914]: I0127 14:36:48.666374 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p5z5z/crc-debug-99rq4" Jan 27 14:36:48 crc kubenswrapper[4914]: I0127 14:36:48.830195 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p5z5z/crc-debug-99rq4" event={"ID":"409ce055-91d8-49e9-aa80-63931fd1df77","Type":"ContainerStarted","Data":"49a587b5cf1dfac15c2e66d61c52632851a24803754794519411d8b1391e1deb"} Jan 27 14:36:49 crc kubenswrapper[4914]: I0127 14:36:49.839407 4914 generic.go:334] "Generic (PLEG): container finished" podID="409ce055-91d8-49e9-aa80-63931fd1df77" containerID="ef31fb82cc1da4091f946b5db0078f9513844940ba453af0044bb76677abe44e" exitCode=1 Jan 27 14:36:49 crc kubenswrapper[4914]: I0127 14:36:49.839449 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p5z5z/crc-debug-99rq4" event={"ID":"409ce055-91d8-49e9-aa80-63931fd1df77","Type":"ContainerDied","Data":"ef31fb82cc1da4091f946b5db0078f9513844940ba453af0044bb76677abe44e"} Jan 27 14:36:49 crc kubenswrapper[4914]: I0127 14:36:49.874279 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p5z5z/crc-debug-99rq4"] Jan 27 14:36:49 crc kubenswrapper[4914]: I0127 14:36:49.883084 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p5z5z/crc-debug-99rq4"] Jan 27 14:36:50 crc kubenswrapper[4914]: I0127 14:36:50.976915 4914 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p5z5z/crc-debug-99rq4" Jan 27 14:36:51 crc kubenswrapper[4914]: I0127 14:36:51.065631 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wf7p\" (UniqueName: \"kubernetes.io/projected/409ce055-91d8-49e9-aa80-63931fd1df77-kube-api-access-7wf7p\") pod \"409ce055-91d8-49e9-aa80-63931fd1df77\" (UID: \"409ce055-91d8-49e9-aa80-63931fd1df77\") " Jan 27 14:36:51 crc kubenswrapper[4914]: I0127 14:36:51.066053 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/409ce055-91d8-49e9-aa80-63931fd1df77-host\") pod \"409ce055-91d8-49e9-aa80-63931fd1df77\" (UID: \"409ce055-91d8-49e9-aa80-63931fd1df77\") " Jan 27 14:36:51 crc kubenswrapper[4914]: I0127 14:36:51.066117 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409ce055-91d8-49e9-aa80-63931fd1df77-host" (OuterVolumeSpecName: "host") pod "409ce055-91d8-49e9-aa80-63931fd1df77" (UID: "409ce055-91d8-49e9-aa80-63931fd1df77"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:36:51 crc kubenswrapper[4914]: I0127 14:36:51.066570 4914 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/409ce055-91d8-49e9-aa80-63931fd1df77-host\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:51 crc kubenswrapper[4914]: I0127 14:36:51.071844 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/409ce055-91d8-49e9-aa80-63931fd1df77-kube-api-access-7wf7p" (OuterVolumeSpecName: "kube-api-access-7wf7p") pod "409ce055-91d8-49e9-aa80-63931fd1df77" (UID: "409ce055-91d8-49e9-aa80-63931fd1df77"). InnerVolumeSpecName "kube-api-access-7wf7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:51 crc kubenswrapper[4914]: I0127 14:36:51.168160 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wf7p\" (UniqueName: \"kubernetes.io/projected/409ce055-91d8-49e9-aa80-63931fd1df77-kube-api-access-7wf7p\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:51 crc kubenswrapper[4914]: I0127 14:36:51.857857 4914 scope.go:117] "RemoveContainer" containerID="ef31fb82cc1da4091f946b5db0078f9513844940ba453af0044bb76677abe44e" Jan 27 14:36:51 crc kubenswrapper[4914]: I0127 14:36:51.857880 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p5z5z/crc-debug-99rq4" Jan 27 14:36:52 crc kubenswrapper[4914]: I0127 14:36:52.306780 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="409ce055-91d8-49e9-aa80-63931fd1df77" path="/var/lib/kubelet/pods/409ce055-91d8-49e9-aa80-63931fd1df77/volumes" Jan 27 14:36:58 crc kubenswrapper[4914]: I0127 14:36:58.295233 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:36:58 crc kubenswrapper[4914]: E0127 14:36:58.296116 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:37:09 crc kubenswrapper[4914]: I0127 14:37:09.295067 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:37:09 crc kubenswrapper[4914]: E0127 14:37:09.295691 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:37:21 crc kubenswrapper[4914]: I0127 14:37:21.294753 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:37:21 crc kubenswrapper[4914]: E0127 14:37:21.295489 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:37:24 crc kubenswrapper[4914]: I0127 14:37:24.825376 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c9f4b5684-nv57j_8cb7de31-9af1-452a-a335-c1ebf2876522/barbican-api/0.log" Jan 27 14:37:25 crc kubenswrapper[4914]: I0127 14:37:25.082202 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c9f4b5684-nv57j_8cb7de31-9af1-452a-a335-c1ebf2876522/barbican-api-log/0.log" Jan 27 14:37:25 crc kubenswrapper[4914]: I0127 14:37:25.178250 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-69bc9ddd86-ns2qc_66b37bce-7727-4a01-b0e5-c4df82590c96/barbican-keystone-listener/0.log" Jan 27 14:37:25 crc kubenswrapper[4914]: I0127 14:37:25.187077 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-69bc9ddd86-ns2qc_66b37bce-7727-4a01-b0e5-c4df82590c96/barbican-keystone-listener-log/0.log" Jan 27 14:37:25 crc kubenswrapper[4914]: I0127 
14:37:25.355138 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-879c45d7-8hbrg_c2813942-0a24-4127-ab18-2b5031826e2c/barbican-worker/0.log" Jan 27 14:37:25 crc kubenswrapper[4914]: I0127 14:37:25.391094 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-879c45d7-8hbrg_c2813942-0a24-4127-ab18-2b5031826e2c/barbican-worker-log/0.log" Jan 27 14:37:25 crc kubenswrapper[4914]: I0127 14:37:25.622882 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed0f4049-d67b-4534-a821-8cbefb969a63/ceilometer-central-agent/0.log" Jan 27 14:37:25 crc kubenswrapper[4914]: I0127 14:37:25.624770 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-7jkvc_77a76dd2-a27e-4755-881f-3472edf77cd6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:25 crc kubenswrapper[4914]: I0127 14:37:25.713315 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed0f4049-d67b-4534-a821-8cbefb969a63/ceilometer-notification-agent/0.log" Jan 27 14:37:25 crc kubenswrapper[4914]: I0127 14:37:25.834029 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed0f4049-d67b-4534-a821-8cbefb969a63/proxy-httpd/0.log" Jan 27 14:37:25 crc kubenswrapper[4914]: I0127 14:37:25.836140 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed0f4049-d67b-4534-a821-8cbefb969a63/sg-core/0.log" Jan 27 14:37:26 crc kubenswrapper[4914]: I0127 14:37:26.136425 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6eb8743b-d452-400b-b2ef-818c074597e6/cinder-api/0.log" Jan 27 14:37:26 crc kubenswrapper[4914]: I0127 14:37:26.206848 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6eb8743b-d452-400b-b2ef-818c074597e6/cinder-api-log/0.log" Jan 27 14:37:26 crc kubenswrapper[4914]: 
I0127 14:37:26.322302 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fba39866-8924-4253-8bc6-e4c85fc9de31/cinder-scheduler/0.log" Jan 27 14:37:26 crc kubenswrapper[4914]: I0127 14:37:26.400010 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fba39866-8924-4253-8bc6-e4c85fc9de31/probe/0.log" Jan 27 14:37:26 crc kubenswrapper[4914]: I0127 14:37:26.526608 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pl4x6_9ef8835d-5ed1-428f-899f-45c41c5ffb4e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:26 crc kubenswrapper[4914]: I0127 14:37:26.628301 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-zlkbx_4ff553dd-8799-4ed1-9f38-25e6f481907d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:26 crc kubenswrapper[4914]: I0127 14:37:26.731902 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66fc59ccbf-9wmtl_c054fe54-b82e-4f46-9f54-29de25ea1583/init/0.log" Jan 27 14:37:26 crc kubenswrapper[4914]: I0127 14:37:26.880905 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66fc59ccbf-9wmtl_c054fe54-b82e-4f46-9f54-29de25ea1583/init/0.log" Jan 27 14:37:26 crc kubenswrapper[4914]: I0127 14:37:26.936474 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66fc59ccbf-9wmtl_c054fe54-b82e-4f46-9f54-29de25ea1583/dnsmasq-dns/0.log" Jan 27 14:37:27 crc kubenswrapper[4914]: I0127 14:37:27.004547 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-lxd9h_bdc53bfd-51de-436e-837e-bfc1186f706f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:27 crc kubenswrapper[4914]: I0127 14:37:27.214069 4914 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_glance-default-external-api-0_2330c3f1-da78-4cc1-a16a-856037a1f395/glance-httpd/0.log" Jan 27 14:37:27 crc kubenswrapper[4914]: I0127 14:37:27.251556 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2330c3f1-da78-4cc1-a16a-856037a1f395/glance-log/0.log" Jan 27 14:37:27 crc kubenswrapper[4914]: I0127 14:37:27.373948 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ff464548-5e9c-4d46-a547-7d0cdd949883/glance-httpd/0.log" Jan 27 14:37:27 crc kubenswrapper[4914]: I0127 14:37:27.451247 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ff464548-5e9c-4d46-a547-7d0cdd949883/glance-log/0.log" Jan 27 14:37:27 crc kubenswrapper[4914]: I0127 14:37:27.661074 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6bb6c77c5d-pwr6c_d7209cbb-e572-463b-bb43-9805cd58ea57/horizon/0.log" Jan 27 14:37:27 crc kubenswrapper[4914]: I0127 14:37:27.876207 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-wrh7d_ecda27a9-3b98-4f3e-9f06-5f8e46af202f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:27 crc kubenswrapper[4914]: I0127 14:37:27.997959 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-mlf59_00e18877-2928-4039-b2e4-562989a3cdb5/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:27 crc kubenswrapper[4914]: I0127 14:37:27.998323 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6bb6c77c5d-pwr6c_d7209cbb-e572-463b-bb43-9805cd58ea57/horizon-log/0.log" Jan 27 14:37:28 crc kubenswrapper[4914]: I0127 14:37:28.215504 4914 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_a67de105-d5d3-48f3-a642-ba7be3dc0920/kube-state-metrics/0.log" Jan 27 14:37:28 crc kubenswrapper[4914]: I0127 14:37:28.302959 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-794d7bcbcd-drzqz_208bd7a9-58df-4bd1-8eac-ddcb45417fb8/keystone-api/0.log" Jan 27 14:37:28 crc kubenswrapper[4914]: I0127 14:37:28.493778 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-2tr9x_900ff3a0-360d-4537-8bad-c2667319bb93/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:28 crc kubenswrapper[4914]: I0127 14:37:28.790767 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6669b7ffb9-n8php_1b9b723f-e648-4f12-86f7-d453e000a46e/neutron-httpd/0.log" Jan 27 14:37:28 crc kubenswrapper[4914]: I0127 14:37:28.801959 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6669b7ffb9-n8php_1b9b723f-e648-4f12-86f7-d453e000a46e/neutron-api/0.log" Jan 27 14:37:29 crc kubenswrapper[4914]: I0127 14:37:29.077722 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fl79h_8b57afc2-5e5e-4268-ac55-f6237fb3f284/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:29 crc kubenswrapper[4914]: I0127 14:37:29.468184 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a546639f-94d3-43dc-8591-a6444b2a2150/nova-api-log/0.log" Jan 27 14:37:29 crc kubenswrapper[4914]: I0127 14:37:29.493711 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2a7f2bb6-0714-482b-91e6-50fea1ab85e2/nova-cell0-conductor-conductor/0.log" Jan 27 14:37:29 crc kubenswrapper[4914]: I0127 14:37:29.572332 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a546639f-94d3-43dc-8591-a6444b2a2150/nova-api-api/0.log" Jan 27 
14:37:29 crc kubenswrapper[4914]: I0127 14:37:29.777185 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3b4c18be-72eb-456f-9a55-eafc2cb451d0/nova-cell1-conductor-conductor/0.log" Jan 27 14:37:29 crc kubenswrapper[4914]: I0127 14:37:29.857079 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_986f8538-35e7-4c21-9bed-b79999a106f0/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 14:37:30 crc kubenswrapper[4914]: I0127 14:37:30.182887 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3/nova-metadata-log/0.log" Jan 27 14:37:30 crc kubenswrapper[4914]: I0127 14:37:30.188572 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-qbcgd_2f1076b4-75fa-4313-b8f7-7071b8da40d2/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:30 crc kubenswrapper[4914]: I0127 14:37:30.530365 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b5d3c192-12c7-44a2-8100-5307d6a9bb9d/nova-scheduler-scheduler/0.log" Jan 27 14:37:30 crc kubenswrapper[4914]: I0127 14:37:30.596501 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_541d0024-7ae1-4b5a-b139-35fe77463191/mysql-bootstrap/0.log" Jan 27 14:37:30 crc kubenswrapper[4914]: I0127 14:37:30.812539 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_541d0024-7ae1-4b5a-b139-35fe77463191/mysql-bootstrap/0.log" Jan 27 14:37:30 crc kubenswrapper[4914]: I0127 14:37:30.865957 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_541d0024-7ae1-4b5a-b139-35fe77463191/galera/0.log" Jan 27 14:37:31 crc kubenswrapper[4914]: I0127 14:37:31.024321 4914 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_fd387894-ddd7-4982-b8be-bb8bcea88486/mysql-bootstrap/0.log" Jan 27 14:37:31 crc kubenswrapper[4914]: I0127 14:37:31.173296 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fd387894-ddd7-4982-b8be-bb8bcea88486/mysql-bootstrap/0.log" Jan 27 14:37:31 crc kubenswrapper[4914]: I0127 14:37:31.299882 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fd387894-ddd7-4982-b8be-bb8bcea88486/galera/0.log" Jan 27 14:37:31 crc kubenswrapper[4914]: I0127 14:37:31.312479 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_45eaeef7-d320-4cb2-9d21-ff2c07fc5fa3/nova-metadata-metadata/0.log" Jan 27 14:37:31 crc kubenswrapper[4914]: I0127 14:37:31.437062 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c3ebffe9-3030-466c-adbf-83deadb5d5d0/openstackclient/0.log" Jan 27 14:37:31 crc kubenswrapper[4914]: I0127 14:37:31.492165 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fwgsp_16d7aef1-746e-4166-a82d-e40371ebc96c/ovn-controller/0.log" Jan 27 14:37:31 crc kubenswrapper[4914]: I0127 14:37:31.674371 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4r9r5_97163637-6474-4c5a-b153-113d64e8c07f/openstack-network-exporter/0.log" Jan 27 14:37:31 crc kubenswrapper[4914]: I0127 14:37:31.793815 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-d2xdr_7668e140-246e-470b-8988-8d716fa6580b/ovsdb-server-init/0.log" Jan 27 14:37:32 crc kubenswrapper[4914]: I0127 14:37:31.999641 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-d2xdr_7668e140-246e-470b-8988-8d716fa6580b/ovs-vswitchd/0.log" Jan 27 14:37:32 crc kubenswrapper[4914]: I0127 14:37:32.034916 4914 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-d2xdr_7668e140-246e-470b-8988-8d716fa6580b/ovsdb-server-init/0.log" Jan 27 14:37:32 crc kubenswrapper[4914]: I0127 14:37:32.070500 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-d2xdr_7668e140-246e-470b-8988-8d716fa6580b/ovsdb-server/0.log" Jan 27 14:37:32 crc kubenswrapper[4914]: I0127 14:37:32.253491 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9tpnf_efed5ce1-5507-4f6b-b9fc-0f97d5e66bc6/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:32 crc kubenswrapper[4914]: I0127 14:37:32.259925 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3824e689-7118-49e0-b61e-da16b54872ca/openstack-network-exporter/0.log" Jan 27 14:37:32 crc kubenswrapper[4914]: I0127 14:37:32.326659 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3824e689-7118-49e0-b61e-da16b54872ca/ovn-northd/0.log" Jan 27 14:37:32 crc kubenswrapper[4914]: I0127 14:37:32.494965 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c74db2f0-b0f5-420d-970a-ecebd81bff03/openstack-network-exporter/0.log" Jan 27 14:37:32 crc kubenswrapper[4914]: I0127 14:37:32.694219 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c74db2f0-b0f5-420d-970a-ecebd81bff03/ovsdbserver-nb/0.log" Jan 27 14:37:32 crc kubenswrapper[4914]: I0127 14:37:32.836220 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_fe285074-4726-42c1-99cc-d99be63c1cbc/openstack-network-exporter/0.log" Jan 27 14:37:32 crc kubenswrapper[4914]: I0127 14:37:32.854439 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_fe285074-4726-42c1-99cc-d99be63c1cbc/ovsdbserver-sb/0.log" Jan 27 14:37:33 crc kubenswrapper[4914]: I0127 14:37:33.090151 4914 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_placement-6d88d8dfdb-6svwq_56fee704-d63c-4264-a135-38cb14dca70f/placement-log/0.log" Jan 27 14:37:33 crc kubenswrapper[4914]: I0127 14:37:33.112783 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6d88d8dfdb-6svwq_56fee704-d63c-4264-a135-38cb14dca70f/placement-api/0.log" Jan 27 14:37:33 crc kubenswrapper[4914]: I0127 14:37:33.177425 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c3655c22-46a7-4ed5-bba1-4a294940777d/setup-container/0.log" Jan 27 14:37:33 crc kubenswrapper[4914]: I0127 14:37:33.294292 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:37:33 crc kubenswrapper[4914]: E0127 14:37:33.294488 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:37:33 crc kubenswrapper[4914]: I0127 14:37:33.482423 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c3655c22-46a7-4ed5-bba1-4a294940777d/setup-container/0.log" Jan 27 14:37:33 crc kubenswrapper[4914]: I0127 14:37:33.488793 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fa420d84-09ad-44c4-9af0-fddfcab7501c/setup-container/0.log" Jan 27 14:37:33 crc kubenswrapper[4914]: I0127 14:37:33.523545 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c3655c22-46a7-4ed5-bba1-4a294940777d/rabbitmq/0.log" Jan 27 14:37:33 crc kubenswrapper[4914]: I0127 14:37:33.741728 4914 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-server-0_fa420d84-09ad-44c4-9af0-fddfcab7501c/setup-container/0.log" Jan 27 14:37:33 crc kubenswrapper[4914]: I0127 14:37:33.772053 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-6mzj7_62c60dfe-c65f-4b26-a21a-a9ace0cc93ee/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:33 crc kubenswrapper[4914]: I0127 14:37:33.776370 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fa420d84-09ad-44c4-9af0-fddfcab7501c/rabbitmq/0.log" Jan 27 14:37:33 crc kubenswrapper[4914]: I0127 14:37:33.971492 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-rtxs8_4498dcaf-b92d-48e5-9c54-8678b3d36f1b/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:34 crc kubenswrapper[4914]: I0127 14:37:34.048598 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-57hbv_c2ce37bb-8f3e-42cf-8c80-fa1c3496354d/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:34 crc kubenswrapper[4914]: I0127 14:37:34.266132 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-r2q65_c529c1e6-5832-42ef-aef0-a67bb6828236/ssh-known-hosts-edpm-deployment/0.log" Jan 27 14:37:34 crc kubenswrapper[4914]: I0127 14:37:34.292041 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-cm6nm_16f74bcf-a553-4fb7-9bbf-be7a617ccbc4/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:34 crc kubenswrapper[4914]: I0127 14:37:34.521373 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5fb5dcf6b9-q5gfl_7a7d0c59-fb20-4508-bfe5-5e91e2f28394/proxy-server/0.log" Jan 27 14:37:34 crc kubenswrapper[4914]: I0127 14:37:34.561185 4914 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_swift-proxy-5fb5dcf6b9-q5gfl_7a7d0c59-fb20-4508-bfe5-5e91e2f28394/proxy-httpd/0.log" Jan 27 14:37:34 crc kubenswrapper[4914]: I0127 14:37:34.690731 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4f8dg_0d441d11-3241-45da-8bcf-c95636d3efa9/swift-ring-rebalance/0.log" Jan 27 14:37:34 crc kubenswrapper[4914]: I0127 14:37:34.792673 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/account-auditor/0.log" Jan 27 14:37:34 crc kubenswrapper[4914]: I0127 14:37:34.861456 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/account-reaper/0.log" Jan 27 14:37:34 crc kubenswrapper[4914]: I0127 14:37:34.940268 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/account-replicator/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.023350 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/container-auditor/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.050915 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/account-server/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.121445 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/container-replicator/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.192552 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/container-server/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.219030 4914 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/container-updater/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.257222 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/object-auditor/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.335361 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/object-expirer/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.419485 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/object-replicator/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.463602 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/object-server/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.482328 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/object-updater/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.582767 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/rsync/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.719417 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-7dhzj_82a18d6e-9d92-402d-a885-8176065c4c66/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.721204 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62cc5d9e-afad-4888-9e8f-c57f7b185d2b/swift-recon-cron/0.log" Jan 27 14:37:35 crc kubenswrapper[4914]: I0127 14:37:35.952434 4914 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_ba57797f-afb4-4ef6-9f35-3ca8e7b6bb3d/test-operator-logs-container/0.log" Jan 27 14:37:36 crc kubenswrapper[4914]: I0127 14:37:36.297264 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7dpcb_e6fbc5c0-ec11-4a75-9bdb-91e5529df7a9/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 14:37:36 crc kubenswrapper[4914]: I0127 14:37:36.586892 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b94746b2-3335-4991-952f-fe8ec53a24b8/tempest-tests-tempest-tests-runner/0.log" Jan 27 14:37:40 crc kubenswrapper[4914]: I0127 14:37:40.695707 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ce960bf5-10e9-4b71-a092-a5b4013adbdf/memcached/0.log" Jan 27 14:37:48 crc kubenswrapper[4914]: I0127 14:37:48.294841 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:37:48 crc kubenswrapper[4914]: E0127 14:37:48.295559 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:38:01 crc kubenswrapper[4914]: I0127 14:38:01.378227 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng_29f90886-e5f8-4c3a-8bff-1eea749b8e34/util/0.log" Jan 27 14:38:01 crc kubenswrapper[4914]: I0127 14:38:01.553196 4914 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng_29f90886-e5f8-4c3a-8bff-1eea749b8e34/pull/0.log" Jan 27 14:38:01 crc kubenswrapper[4914]: I0127 14:38:01.592978 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng_29f90886-e5f8-4c3a-8bff-1eea749b8e34/util/0.log" Jan 27 14:38:01 crc kubenswrapper[4914]: I0127 14:38:01.602335 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng_29f90886-e5f8-4c3a-8bff-1eea749b8e34/pull/0.log" Jan 27 14:38:01 crc kubenswrapper[4914]: I0127 14:38:01.768575 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng_29f90886-e5f8-4c3a-8bff-1eea749b8e34/extract/0.log" Jan 27 14:38:01 crc kubenswrapper[4914]: I0127 14:38:01.775851 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng_29f90886-e5f8-4c3a-8bff-1eea749b8e34/pull/0.log" Jan 27 14:38:01 crc kubenswrapper[4914]: I0127 14:38:01.781628 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cf784ng_29f90886-e5f8-4c3a-8bff-1eea749b8e34/util/0.log" Jan 27 14:38:02 crc kubenswrapper[4914]: I0127 14:38:02.050746 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5fdc687f5-z4dmz_038a0f9d-802c-4615-bd9f-82f843988bcb/manager/0.log" Jan 27 14:38:02 crc kubenswrapper[4914]: I0127 14:38:02.201854 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-76d4d5b8f9-2zr2g_8bd09249-97f3-4b92-a829-c6f70919052a/manager/0.log" Jan 27 14:38:02 crc 
kubenswrapper[4914]: I0127 14:38:02.462291 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84d5bb46b-624jw_63b541a1-cc9f-41ea-8da8-c219f9fff59b/manager/0.log" Jan 27 14:38:02 crc kubenswrapper[4914]: I0127 14:38:02.549690 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-dnzgt_6f5f5515-e498-41a2-8433-5ccec9325ff0/manager/0.log" Jan 27 14:38:02 crc kubenswrapper[4914]: I0127 14:38:02.682106 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-tdhdr_d32a4b7b-f918-44bb-86a2-95d862a35727/manager/0.log" Jan 27 14:38:03 crc kubenswrapper[4914]: I0127 14:38:03.077123 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-58865f87b4-ktfq5_96867157-752b-449f-b3ee-c0b428e0dbb1/manager/0.log" Jan 27 14:38:03 crc kubenswrapper[4914]: I0127 14:38:03.262872 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-fv4pk_81865888-d857-481d-bcd5-b5e9e17d4b7d/manager/0.log" Jan 27 14:38:03 crc kubenswrapper[4914]: I0127 14:38:03.295034 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:38:03 crc kubenswrapper[4914]: E0127 14:38:03.295250 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:38:03 crc kubenswrapper[4914]: I0127 14:38:03.416599 4914 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-78f8b7b89c-56q9z_ed141eef-7122-41b3-9798-e74d82785c1d/manager/0.log" Jan 27 14:38:03 crc kubenswrapper[4914]: I0127 14:38:03.472491 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78b8f8fd84-64vmw_ba1681ef-9c83-419c-bb76-f52cc3e28273/manager/0.log" Jan 27 14:38:03 crc kubenswrapper[4914]: I0127 14:38:03.681867 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-5d4x9_f5f15078-52a7-47ed-96dd-c831a33562cc/manager/0.log" Jan 27 14:38:03 crc kubenswrapper[4914]: I0127 14:38:03.826203 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-569695f6c5-g26rc_718cf203-74e7-4ab4-9e10-2161163946b6/manager/0.log" Jan 27 14:38:04 crc kubenswrapper[4914]: I0127 14:38:04.135992 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74ffd97575-94b6l_514e105e-95e2-424b-b003-eb5967594784/manager/0.log" Jan 27 14:38:04 crc kubenswrapper[4914]: I0127 14:38:04.442649 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7bd95ffd6d8j878_2fef03f4-218b-4d6b-b9ca-c303c7c7b002/manager/0.log" Jan 27 14:38:04 crc kubenswrapper[4914]: I0127 14:38:04.894472 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6bfcf7b875-vf68w_81257126-8f49-4586-9772-3f22b3e82782/operator/0.log" Jan 27 14:38:05 crc kubenswrapper[4914]: I0127 14:38:05.318708 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nqpn7_fac137ad-f24e-4020-a9d4-118fd8cf2dd2/registry-server/0.log" Jan 27 14:38:05 crc kubenswrapper[4914]: I0127 14:38:05.834262 4914 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-qv5qs_27a3b302-137f-4c5e-a867-8ad8de53db37/manager/0.log" Jan 27 14:38:05 crc kubenswrapper[4914]: I0127 14:38:05.902981 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bf4858b78-kjdk7_e843177c-8972-4f13-8b45-0d9d229ee1a0/manager/0.log" Jan 27 14:38:06 crc kubenswrapper[4914]: I0127 14:38:06.082761 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-7748d79f84-s5pfn_fc2d6379-e123-4273-8f4a-d36c01030a01/manager/0.log" Jan 27 14:38:06 crc kubenswrapper[4914]: I0127 14:38:06.136313 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vljnz_25c3827b-f4ed-432a-b7cb-928c4b315176/operator/0.log" Jan 27 14:38:06 crc kubenswrapper[4914]: I0127 14:38:06.410659 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-65596dbf77-9rwcq_d2699df4-2885-4ccc-ae67-5fddc1d1a385/manager/0.log" Jan 27 14:38:06 crc kubenswrapper[4914]: I0127 14:38:06.566825 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7db57dc8bf-sk6rz_f478cd9a-acc7-4da7-9c4f-e089f3bdd465/manager/0.log" Jan 27 14:38:06 crc kubenswrapper[4914]: I0127 14:38:06.659958 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-kflq5_01d1f0ed-818c-4a9d-9635-8aa7dea1cfa2/manager/0.log" Jan 27 14:38:06 crc kubenswrapper[4914]: I0127 14:38:06.882162 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6476466c7c-rsnw7_1c94fefd-3fb7-4730-9386-1499a83c60c6/manager/0.log" Jan 27 14:38:07 crc kubenswrapper[4914]: I0127 14:38:07.005141 4914 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-76958f4d87-nc9zh_dda335ba-11df-4fb7-86ba-4b2a0c8dbdaf/manager/0.log" Jan 27 14:38:08 crc kubenswrapper[4914]: I0127 14:38:08.835749 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75b8f798ff-wq8mf_0093b4bf-5086-4bae-adbb-1e18935cc19a/manager/0.log" Jan 27 14:38:18 crc kubenswrapper[4914]: I0127 14:38:18.295372 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:38:18 crc kubenswrapper[4914]: E0127 14:38:18.296105 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:38:25 crc kubenswrapper[4914]: I0127 14:38:25.615697 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mlnd8_628bfff0-2254-4c7c-a8a4-01b2288d8535/control-plane-machine-set-operator/0.log" Jan 27 14:38:25 crc kubenswrapper[4914]: I0127 14:38:25.795462 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z74jx_0074c027-d7a9-4958-81dc-65a378eb8910/kube-rbac-proxy/0.log" Jan 27 14:38:25 crc kubenswrapper[4914]: I0127 14:38:25.818416 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z74jx_0074c027-d7a9-4958-81dc-65a378eb8910/machine-api-operator/0.log" Jan 27 14:38:29 crc kubenswrapper[4914]: I0127 14:38:29.294557 4914 scope.go:117] "RemoveContainer" 
containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:38:29 crc kubenswrapper[4914]: E0127 14:38:29.295644 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:38:38 crc kubenswrapper[4914]: I0127 14:38:38.117909 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-pszcq_f1f5b8cb-4004-4bab-89c7-f730f69d8ca2/cert-manager-controller/0.log" Jan 27 14:38:38 crc kubenswrapper[4914]: I0127 14:38:38.389085 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jcblw_d3dde8fe-073a-4757-ae55-141e026db3ba/cert-manager-cainjector/0.log" Jan 27 14:38:38 crc kubenswrapper[4914]: I0127 14:38:38.421776 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-9tnxd_3f8a8e0d-b50d-48d2-b899-a22eef5a2253/cert-manager-webhook/0.log" Jan 27 14:38:42 crc kubenswrapper[4914]: I0127 14:38:42.302029 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:38:42 crc kubenswrapper[4914]: E0127 14:38:42.302726 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:38:51 crc 
kubenswrapper[4914]: I0127 14:38:51.851062 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-zmn2w_32db9757-0171-406d-807a-103144e273ac/nmstate-console-plugin/0.log" Jan 27 14:38:51 crc kubenswrapper[4914]: I0127 14:38:51.934523 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sknr4_263a8102-46cc-45f5-b0b9-9d20f072147d/nmstate-handler/0.log" Jan 27 14:38:52 crc kubenswrapper[4914]: I0127 14:38:52.047923 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-5ltgd_497bf40d-5a3c-48a5-90b5-e2bc0566f520/kube-rbac-proxy/0.log" Jan 27 14:38:52 crc kubenswrapper[4914]: I0127 14:38:52.090650 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-5ltgd_497bf40d-5a3c-48a5-90b5-e2bc0566f520/nmstate-metrics/0.log" Jan 27 14:38:52 crc kubenswrapper[4914]: I0127 14:38:52.212107 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-g7ck7_e3d282b5-2dc5-4c0b-9a8f-aace2048b049/nmstate-operator/0.log" Jan 27 14:38:52 crc kubenswrapper[4914]: I0127 14:38:52.304791 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-7wff4_cc90236f-8757-4fbb-89a8-b79c69e688e3/nmstate-webhook/0.log" Jan 27 14:38:56 crc kubenswrapper[4914]: I0127 14:38:56.295364 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:38:56 crc kubenswrapper[4914]: E0127 14:38:56.296381 4914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qhdfz_openshift-machine-config-operator(bdf2dcff-9caa-45ba-98a8-0a00861bd11a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" Jan 27 14:39:09 crc kubenswrapper[4914]: I0127 14:39:09.294663 4914 scope.go:117] "RemoveContainer" containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:39:10 crc kubenswrapper[4914]: I0127 14:39:10.042518 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"40837c9b1c5142e5ba99883dc8307f8fc3c06f7346b9663b379e8a06df6b926f"} Jan 27 14:39:19 crc kubenswrapper[4914]: I0127 14:39:19.354698 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-c2g6z_bafcf3de-a99a-4d2a-8ed0-55411eea67d0/kube-rbac-proxy/0.log" Jan 27 14:39:19 crc kubenswrapper[4914]: I0127 14:39:19.439629 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-c2g6z_bafcf3de-a99a-4d2a-8ed0-55411eea67d0/controller/0.log" Jan 27 14:39:19 crc kubenswrapper[4914]: I0127 14:39:19.573240 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/cp-frr-files/0.log" Jan 27 14:39:19 crc kubenswrapper[4914]: I0127 14:39:19.746162 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/cp-reloader/0.log" Jan 27 14:39:19 crc kubenswrapper[4914]: I0127 14:39:19.747501 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/cp-frr-files/0.log" Jan 27 14:39:19 crc kubenswrapper[4914]: I0127 14:39:19.776273 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/cp-metrics/0.log" Jan 27 14:39:19 crc kubenswrapper[4914]: I0127 14:39:19.806428 
4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/cp-reloader/0.log" Jan 27 14:39:19 crc kubenswrapper[4914]: I0127 14:39:19.947677 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/cp-frr-files/0.log" Jan 27 14:39:19 crc kubenswrapper[4914]: I0127 14:39:19.961576 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/cp-reloader/0.log" Jan 27 14:39:20 crc kubenswrapper[4914]: I0127 14:39:19.999805 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/cp-metrics/0.log" Jan 27 14:39:20 crc kubenswrapper[4914]: I0127 14:39:20.014404 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/cp-metrics/0.log" Jan 27 14:39:20 crc kubenswrapper[4914]: I0127 14:39:20.173377 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/cp-frr-files/0.log" Jan 27 14:39:20 crc kubenswrapper[4914]: I0127 14:39:20.182537 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/cp-metrics/0.log" Jan 27 14:39:20 crc kubenswrapper[4914]: I0127 14:39:20.212676 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/controller/0.log" Jan 27 14:39:20 crc kubenswrapper[4914]: I0127 14:39:20.222185 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/cp-reloader/0.log" Jan 27 14:39:20 crc kubenswrapper[4914]: I0127 14:39:20.393747 4914 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/kube-rbac-proxy/0.log" Jan 27 14:39:20 crc kubenswrapper[4914]: I0127 14:39:20.451647 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/frr-metrics/0.log" Jan 27 14:39:20 crc kubenswrapper[4914]: I0127 14:39:20.467254 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/kube-rbac-proxy-frr/0.log" Jan 27 14:39:20 crc kubenswrapper[4914]: I0127 14:39:20.641045 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/reloader/0.log" Jan 27 14:39:20 crc kubenswrapper[4914]: I0127 14:39:20.709142 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-gxxf7_f61fcde0-6647-4031-a3fa-4de22ba93d52/frr-k8s-webhook-server/0.log" Jan 27 14:39:20 crc kubenswrapper[4914]: I0127 14:39:20.905108 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6fb95778d6-wt7m7_77a4ae14-5fc9-461d-b886-b0dee70471ed/manager/0.log" Jan 27 14:39:21 crc kubenswrapper[4914]: I0127 14:39:21.178647 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6444797f4c-hnlrd_79390e5c-67e1-4a23-82c5-4c0bc346586b/webhook-server/0.log" Jan 27 14:39:21 crc kubenswrapper[4914]: I0127 14:39:21.239884 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gxlp7_0452f298-73ba-4192-9aba-307771710712/kube-rbac-proxy/0.log" Jan 27 14:39:21 crc kubenswrapper[4914]: I0127 14:39:21.962412 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v75q2_ed6bd514-9580-4226-927d-9bb52c0a6d76/frr/0.log" Jan 27 14:39:21 crc kubenswrapper[4914]: I0127 14:39:21.997604 4914 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gxlp7_0452f298-73ba-4192-9aba-307771710712/speaker/0.log" Jan 27 14:39:39 crc kubenswrapper[4914]: I0127 14:39:39.732399 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ed0f4049-d67b-4534-a821-8cbefb969a63" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 27 14:39:39 crc kubenswrapper[4914]: I0127 14:39:39.733340 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ed0f4049-d67b-4534-a821-8cbefb969a63" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 27 14:39:41 crc kubenswrapper[4914]: I0127 14:39:41.766029 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv_c1c7b733-800f-4b1c-93bd-1f5bf1653a64/util/0.log" Jan 27 14:39:41 crc kubenswrapper[4914]: I0127 14:39:41.766180 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv_c1c7b733-800f-4b1c-93bd-1f5bf1653a64/util/0.log" Jan 27 14:39:41 crc kubenswrapper[4914]: I0127 14:39:41.784067 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv_c1c7b733-800f-4b1c-93bd-1f5bf1653a64/pull/0.log" Jan 27 14:39:41 crc kubenswrapper[4914]: I0127 14:39:41.784132 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv_c1c7b733-800f-4b1c-93bd-1f5bf1653a64/pull/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.076677 4914 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv_c1c7b733-800f-4b1c-93bd-1f5bf1653a64/extract/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.089207 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4_48d0fa7e-8e07-40a1-813d-0eee2fcf2895/util/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.213481 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv_c1c7b733-800f-4b1c-93bd-1f5bf1653a64/pull/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.215230 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcltrnv_c1c7b733-800f-4b1c-93bd-1f5bf1653a64/util/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.275530 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4_48d0fa7e-8e07-40a1-813d-0eee2fcf2895/util/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.312094 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4_48d0fa7e-8e07-40a1-813d-0eee2fcf2895/pull/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.312229 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4_48d0fa7e-8e07-40a1-813d-0eee2fcf2895/pull/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.461176 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4_48d0fa7e-8e07-40a1-813d-0eee2fcf2895/pull/0.log" Jan 27 
14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.531292 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4_48d0fa7e-8e07-40a1-813d-0eee2fcf2895/util/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.557648 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjprh_26b425ae-cbd3-4e25-becc-0a4c638599b2/extract-utilities/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.562809 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qx4h4_48d0fa7e-8e07-40a1-813d-0eee2fcf2895/extract/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.652950 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjprh_26b425ae-cbd3-4e25-becc-0a4c638599b2/extract-utilities/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.674090 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjprh_26b425ae-cbd3-4e25-becc-0a4c638599b2/extract-content/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.702083 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjprh_26b425ae-cbd3-4e25-becc-0a4c638599b2/extract-content/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.853020 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjprh_26b425ae-cbd3-4e25-becc-0a4c638599b2/extract-utilities/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.882453 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjprh_26b425ae-cbd3-4e25-becc-0a4c638599b2/extract-content/0.log" Jan 27 14:39:42 crc kubenswrapper[4914]: I0127 14:39:42.964604 
4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bj7ql_8e3c9d63-36bb-4aee-a87f-f95798649571/extract-utilities/0.log" Jan 27 14:39:43 crc kubenswrapper[4914]: I0127 14:39:43.079985 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bj7ql_8e3c9d63-36bb-4aee-a87f-f95798649571/extract-utilities/0.log" Jan 27 14:39:43 crc kubenswrapper[4914]: I0127 14:39:43.099240 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjprh_26b425ae-cbd3-4e25-becc-0a4c638599b2/registry-server/0.log" Jan 27 14:39:43 crc kubenswrapper[4914]: I0127 14:39:43.148621 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bj7ql_8e3c9d63-36bb-4aee-a87f-f95798649571/extract-content/0.log" Jan 27 14:39:43 crc kubenswrapper[4914]: I0127 14:39:43.177786 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bj7ql_8e3c9d63-36bb-4aee-a87f-f95798649571/extract-content/0.log" Jan 27 14:39:43 crc kubenswrapper[4914]: I0127 14:39:43.285003 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bj7ql_8e3c9d63-36bb-4aee-a87f-f95798649571/extract-content/0.log" Jan 27 14:39:43 crc kubenswrapper[4914]: I0127 14:39:43.306264 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bj7ql_8e3c9d63-36bb-4aee-a87f-f95798649571/extract-utilities/0.log" Jan 27 14:39:43 crc kubenswrapper[4914]: I0127 14:39:43.365671 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-bdpcf_7ea81bc4-78b8-4b11-a245-95037884bbde/marketplace-operator/0.log" Jan 27 14:39:43 crc kubenswrapper[4914]: I0127 14:39:43.592015 4914 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-t69tv_0a47381b-bbaa-48f0-93d6-06bdd256dcc1/extract-utilities/0.log" Jan 27 14:39:43 crc kubenswrapper[4914]: I0127 14:39:43.720357 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-t69tv_0a47381b-bbaa-48f0-93d6-06bdd256dcc1/extract-utilities/0.log" Jan 27 14:39:43 crc kubenswrapper[4914]: I0127 14:39:43.739087 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-t69tv_0a47381b-bbaa-48f0-93d6-06bdd256dcc1/extract-content/0.log" Jan 27 14:39:43 crc kubenswrapper[4914]: I0127 14:39:43.797178 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bj7ql_8e3c9d63-36bb-4aee-a87f-f95798649571/registry-server/0.log" Jan 27 14:39:43 crc kubenswrapper[4914]: I0127 14:39:43.808978 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-t69tv_0a47381b-bbaa-48f0-93d6-06bdd256dcc1/extract-content/0.log" Jan 27 14:39:44 crc kubenswrapper[4914]: I0127 14:39:43.987946 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-t69tv_0a47381b-bbaa-48f0-93d6-06bdd256dcc1/extract-content/0.log" Jan 27 14:39:44 crc kubenswrapper[4914]: I0127 14:39:44.006072 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5ll2s_ba06a80d-4d83-4f32-bfc3-78c15cfebfe7/extract-utilities/0.log" Jan 27 14:39:44 crc kubenswrapper[4914]: I0127 14:39:44.374946 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-t69tv_0a47381b-bbaa-48f0-93d6-06bdd256dcc1/extract-utilities/0.log" Jan 27 14:39:44 crc kubenswrapper[4914]: I0127 14:39:44.493229 4914 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-t69tv_0a47381b-bbaa-48f0-93d6-06bdd256dcc1/registry-server/0.log" Jan 27 14:39:44 crc kubenswrapper[4914]: I0127 14:39:44.511680 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5ll2s_ba06a80d-4d83-4f32-bfc3-78c15cfebfe7/extract-content/0.log" Jan 27 14:39:44 crc kubenswrapper[4914]: I0127 14:39:44.538320 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5ll2s_ba06a80d-4d83-4f32-bfc3-78c15cfebfe7/extract-content/0.log" Jan 27 14:39:44 crc kubenswrapper[4914]: I0127 14:39:44.561151 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5ll2s_ba06a80d-4d83-4f32-bfc3-78c15cfebfe7/extract-utilities/0.log" Jan 27 14:39:44 crc kubenswrapper[4914]: I0127 14:39:44.708431 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5ll2s_ba06a80d-4d83-4f32-bfc3-78c15cfebfe7/extract-content/0.log" Jan 27 14:39:44 crc kubenswrapper[4914]: I0127 14:39:44.723287 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5ll2s_ba06a80d-4d83-4f32-bfc3-78c15cfebfe7/extract-utilities/0.log" Jan 27 14:39:45 crc kubenswrapper[4914]: I0127 14:39:45.229523 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5ll2s_ba06a80d-4d83-4f32-bfc3-78c15cfebfe7/registry-server/0.log" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.338580 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vsbn9"] Jan 27 14:40:14 crc kubenswrapper[4914]: E0127 14:40:14.339613 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409ce055-91d8-49e9-aa80-63931fd1df77" containerName="container-00" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.339631 4914 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="409ce055-91d8-49e9-aa80-63931fd1df77" containerName="container-00" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.344001 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="409ce055-91d8-49e9-aa80-63931fd1df77" containerName="container-00" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.346411 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.356675 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vsbn9"] Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.419112 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e10963a0-5c27-4890-9150-d1615e629156-catalog-content\") pod \"redhat-operators-vsbn9\" (UID: \"e10963a0-5c27-4890-9150-d1615e629156\") " pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.419397 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e10963a0-5c27-4890-9150-d1615e629156-utilities\") pod \"redhat-operators-vsbn9\" (UID: \"e10963a0-5c27-4890-9150-d1615e629156\") " pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.419435 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gphqf\" (UniqueName: \"kubernetes.io/projected/e10963a0-5c27-4890-9150-d1615e629156-kube-api-access-gphqf\") pod \"redhat-operators-vsbn9\" (UID: \"e10963a0-5c27-4890-9150-d1615e629156\") " pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.521352 4914 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e10963a0-5c27-4890-9150-d1615e629156-catalog-content\") pod \"redhat-operators-vsbn9\" (UID: \"e10963a0-5c27-4890-9150-d1615e629156\") " pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.521711 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e10963a0-5c27-4890-9150-d1615e629156-utilities\") pod \"redhat-operators-vsbn9\" (UID: \"e10963a0-5c27-4890-9150-d1615e629156\") " pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.521794 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gphqf\" (UniqueName: \"kubernetes.io/projected/e10963a0-5c27-4890-9150-d1615e629156-kube-api-access-gphqf\") pod \"redhat-operators-vsbn9\" (UID: \"e10963a0-5c27-4890-9150-d1615e629156\") " pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.522040 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e10963a0-5c27-4890-9150-d1615e629156-catalog-content\") pod \"redhat-operators-vsbn9\" (UID: \"e10963a0-5c27-4890-9150-d1615e629156\") " pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.522130 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e10963a0-5c27-4890-9150-d1615e629156-utilities\") pod \"redhat-operators-vsbn9\" (UID: \"e10963a0-5c27-4890-9150-d1615e629156\") " pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.544871 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gphqf\" 
(UniqueName: \"kubernetes.io/projected/e10963a0-5c27-4890-9150-d1615e629156-kube-api-access-gphqf\") pod \"redhat-operators-vsbn9\" (UID: \"e10963a0-5c27-4890-9150-d1615e629156\") " pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:14 crc kubenswrapper[4914]: I0127 14:40:14.683602 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:15 crc kubenswrapper[4914]: I0127 14:40:15.236969 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vsbn9"] Jan 27 14:40:15 crc kubenswrapper[4914]: I0127 14:40:15.589786 4914 generic.go:334] "Generic (PLEG): container finished" podID="e10963a0-5c27-4890-9150-d1615e629156" containerID="17abbb1640be3180b9b8b485f1b1be10fed0fe228424492342823e1de92fad82" exitCode=0 Jan 27 14:40:15 crc kubenswrapper[4914]: I0127 14:40:15.589944 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vsbn9" event={"ID":"e10963a0-5c27-4890-9150-d1615e629156","Type":"ContainerDied","Data":"17abbb1640be3180b9b8b485f1b1be10fed0fe228424492342823e1de92fad82"} Jan 27 14:40:15 crc kubenswrapper[4914]: I0127 14:40:15.590073 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vsbn9" event={"ID":"e10963a0-5c27-4890-9150-d1615e629156","Type":"ContainerStarted","Data":"6615c419757b121aac37fe3999aee5d606674d820603ad5132cf5e022dbeb91a"} Jan 27 14:40:15 crc kubenswrapper[4914]: I0127 14:40:15.591776 4914 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:40:17 crc kubenswrapper[4914]: I0127 14:40:17.608688 4914 generic.go:334] "Generic (PLEG): container finished" podID="e10963a0-5c27-4890-9150-d1615e629156" containerID="1e2bae77bb60866e1e74637a2294a743c62c69b0dcbb3b0fec024095c72a965b" exitCode=0 Jan 27 14:40:17 crc kubenswrapper[4914]: I0127 14:40:17.608792 4914 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vsbn9" event={"ID":"e10963a0-5c27-4890-9150-d1615e629156","Type":"ContainerDied","Data":"1e2bae77bb60866e1e74637a2294a743c62c69b0dcbb3b0fec024095c72a965b"} Jan 27 14:40:18 crc kubenswrapper[4914]: I0127 14:40:18.620959 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vsbn9" event={"ID":"e10963a0-5c27-4890-9150-d1615e629156","Type":"ContainerStarted","Data":"64b2d540041447c48a92c412ec1a825f723a14be9bc33d7ae396654921213747"} Jan 27 14:40:18 crc kubenswrapper[4914]: I0127 14:40:18.644546 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vsbn9" podStartSLOduration=2.197282115 podStartE2EDuration="4.644527949s" podCreationTimestamp="2026-01-27 14:40:14 +0000 UTC" firstStartedPulling="2026-01-27 14:40:15.591545972 +0000 UTC m=+3373.903896057" lastFinishedPulling="2026-01-27 14:40:18.038791806 +0000 UTC m=+3376.351141891" observedRunningTime="2026-01-27 14:40:18.635893683 +0000 UTC m=+3376.948243768" watchObservedRunningTime="2026-01-27 14:40:18.644527949 +0000 UTC m=+3376.956878034" Jan 27 14:40:24 crc kubenswrapper[4914]: I0127 14:40:24.684449 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:24 crc kubenswrapper[4914]: I0127 14:40:24.685076 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:24 crc kubenswrapper[4914]: I0127 14:40:24.756465 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:25 crc kubenswrapper[4914]: I0127 14:40:25.740220 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:25 crc kubenswrapper[4914]: I0127 14:40:25.801931 
4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vsbn9"] Jan 27 14:40:27 crc kubenswrapper[4914]: I0127 14:40:27.704463 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vsbn9" podUID="e10963a0-5c27-4890-9150-d1615e629156" containerName="registry-server" containerID="cri-o://64b2d540041447c48a92c412ec1a825f723a14be9bc33d7ae396654921213747" gracePeriod=2 Jan 27 14:40:30 crc kubenswrapper[4914]: I0127 14:40:30.734451 4914 generic.go:334] "Generic (PLEG): container finished" podID="e10963a0-5c27-4890-9150-d1615e629156" containerID="64b2d540041447c48a92c412ec1a825f723a14be9bc33d7ae396654921213747" exitCode=0 Jan 27 14:40:30 crc kubenswrapper[4914]: I0127 14:40:30.734749 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vsbn9" event={"ID":"e10963a0-5c27-4890-9150-d1615e629156","Type":"ContainerDied","Data":"64b2d540041447c48a92c412ec1a825f723a14be9bc33d7ae396654921213747"} Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.073393 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.160530 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gphqf\" (UniqueName: \"kubernetes.io/projected/e10963a0-5c27-4890-9150-d1615e629156-kube-api-access-gphqf\") pod \"e10963a0-5c27-4890-9150-d1615e629156\" (UID: \"e10963a0-5c27-4890-9150-d1615e629156\") " Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.160695 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e10963a0-5c27-4890-9150-d1615e629156-catalog-content\") pod \"e10963a0-5c27-4890-9150-d1615e629156\" (UID: \"e10963a0-5c27-4890-9150-d1615e629156\") " Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.160805 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e10963a0-5c27-4890-9150-d1615e629156-utilities\") pod \"e10963a0-5c27-4890-9150-d1615e629156\" (UID: \"e10963a0-5c27-4890-9150-d1615e629156\") " Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.163885 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e10963a0-5c27-4890-9150-d1615e629156-utilities" (OuterVolumeSpecName: "utilities") pod "e10963a0-5c27-4890-9150-d1615e629156" (UID: "e10963a0-5c27-4890-9150-d1615e629156"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.174212 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e10963a0-5c27-4890-9150-d1615e629156-kube-api-access-gphqf" (OuterVolumeSpecName: "kube-api-access-gphqf") pod "e10963a0-5c27-4890-9150-d1615e629156" (UID: "e10963a0-5c27-4890-9150-d1615e629156"). InnerVolumeSpecName "kube-api-access-gphqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.263517 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e10963a0-5c27-4890-9150-d1615e629156-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.263570 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gphqf\" (UniqueName: \"kubernetes.io/projected/e10963a0-5c27-4890-9150-d1615e629156-kube-api-access-gphqf\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.321350 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e10963a0-5c27-4890-9150-d1615e629156-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e10963a0-5c27-4890-9150-d1615e629156" (UID: "e10963a0-5c27-4890-9150-d1615e629156"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.365121 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e10963a0-5c27-4890-9150-d1615e629156-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.745409 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vsbn9" event={"ID":"e10963a0-5c27-4890-9150-d1615e629156","Type":"ContainerDied","Data":"6615c419757b121aac37fe3999aee5d606674d820603ad5132cf5e022dbeb91a"} Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.745458 4914 scope.go:117] "RemoveContainer" containerID="64b2d540041447c48a92c412ec1a825f723a14be9bc33d7ae396654921213747" Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.745586 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vsbn9" Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.786198 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vsbn9"] Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.794481 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vsbn9"] Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.802573 4914 scope.go:117] "RemoveContainer" containerID="1e2bae77bb60866e1e74637a2294a743c62c69b0dcbb3b0fec024095c72a965b" Jan 27 14:40:31 crc kubenswrapper[4914]: I0127 14:40:31.843481 4914 scope.go:117] "RemoveContainer" containerID="17abbb1640be3180b9b8b485f1b1be10fed0fe228424492342823e1de92fad82" Jan 27 14:40:32 crc kubenswrapper[4914]: I0127 14:40:32.303947 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e10963a0-5c27-4890-9150-d1615e629156" path="/var/lib/kubelet/pods/e10963a0-5c27-4890-9150-d1615e629156/volumes" Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.059470 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t5rs2"] Jan 27 14:40:53 crc kubenswrapper[4914]: E0127 14:40:53.060295 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10963a0-5c27-4890-9150-d1615e629156" containerName="extract-utilities" Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.060308 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10963a0-5c27-4890-9150-d1615e629156" containerName="extract-utilities" Jan 27 14:40:53 crc kubenswrapper[4914]: E0127 14:40:53.060323 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10963a0-5c27-4890-9150-d1615e629156" containerName="registry-server" Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.060329 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10963a0-5c27-4890-9150-d1615e629156" containerName="registry-server" Jan 27 
14:40:53 crc kubenswrapper[4914]: E0127 14:40:53.060344 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10963a0-5c27-4890-9150-d1615e629156" containerName="extract-content"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.060351 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10963a0-5c27-4890-9150-d1615e629156" containerName="extract-content"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.060545 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="e10963a0-5c27-4890-9150-d1615e629156" containerName="registry-server"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.061754 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.083417 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t5rs2"]
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.145021 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-utilities\") pod \"community-operators-t5rs2\" (UID: \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\") " pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.145094 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7s8\" (UniqueName: \"kubernetes.io/projected/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-kube-api-access-wf7s8\") pod \"community-operators-t5rs2\" (UID: \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\") " pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.145457 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-catalog-content\") pod \"community-operators-t5rs2\" (UID: \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\") " pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.247689 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-utilities\") pod \"community-operators-t5rs2\" (UID: \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\") " pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.247778 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf7s8\" (UniqueName: \"kubernetes.io/projected/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-kube-api-access-wf7s8\") pod \"community-operators-t5rs2\" (UID: \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\") " pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.247918 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-catalog-content\") pod \"community-operators-t5rs2\" (UID: \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\") " pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.248417 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-catalog-content\") pod \"community-operators-t5rs2\" (UID: \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\") " pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.248687 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-utilities\") pod \"community-operators-t5rs2\" (UID: \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\") " pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.253811 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tktd7"]
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.256199 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.269344 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tktd7"]
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.283687 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf7s8\" (UniqueName: \"kubernetes.io/projected/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-kube-api-access-wf7s8\") pod \"community-operators-t5rs2\" (UID: \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\") " pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.349344 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77aaf895-7da1-4c98-8c23-cd4e2c52430b-utilities\") pod \"redhat-marketplace-tktd7\" (UID: \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\") " pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.349484 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77aaf895-7da1-4c98-8c23-cd4e2c52430b-catalog-content\") pod \"redhat-marketplace-tktd7\" (UID: \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\") " pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.349921 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxv8n\" (UniqueName: \"kubernetes.io/projected/77aaf895-7da1-4c98-8c23-cd4e2c52430b-kube-api-access-vxv8n\") pod \"redhat-marketplace-tktd7\" (UID: \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\") " pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.384517 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.452250 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxv8n\" (UniqueName: \"kubernetes.io/projected/77aaf895-7da1-4c98-8c23-cd4e2c52430b-kube-api-access-vxv8n\") pod \"redhat-marketplace-tktd7\" (UID: \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\") " pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.452728 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77aaf895-7da1-4c98-8c23-cd4e2c52430b-utilities\") pod \"redhat-marketplace-tktd7\" (UID: \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\") " pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.452783 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77aaf895-7da1-4c98-8c23-cd4e2c52430b-catalog-content\") pod \"redhat-marketplace-tktd7\" (UID: \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\") " pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.453181 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77aaf895-7da1-4c98-8c23-cd4e2c52430b-utilities\") pod \"redhat-marketplace-tktd7\" (UID: \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\") " pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.453214 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77aaf895-7da1-4c98-8c23-cd4e2c52430b-catalog-content\") pod \"redhat-marketplace-tktd7\" (UID: \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\") " pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.479421 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxv8n\" (UniqueName: \"kubernetes.io/projected/77aaf895-7da1-4c98-8c23-cd4e2c52430b-kube-api-access-vxv8n\") pod \"redhat-marketplace-tktd7\" (UID: \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\") " pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.620269 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:40:53 crc kubenswrapper[4914]: I0127 14:40:53.962246 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t5rs2"]
Jan 27 14:40:54 crc kubenswrapper[4914]: W0127 14:40:54.131427 4914 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77aaf895_7da1_4c98_8c23_cd4e2c52430b.slice/crio-d6757607df4fe2d265b3380bd416c55f4b11a99f913a663445e37592acc51c91 WatchSource:0}: Error finding container d6757607df4fe2d265b3380bd416c55f4b11a99f913a663445e37592acc51c91: Status 404 returned error can't find the container with id d6757607df4fe2d265b3380bd416c55f4b11a99f913a663445e37592acc51c91
Jan 27 14:40:54 crc kubenswrapper[4914]: I0127 14:40:54.134392 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tktd7"]
Jan 27 14:40:54 crc kubenswrapper[4914]: I0127 14:40:54.960650 4914 generic.go:334] "Generic (PLEG): container finished" podID="77aaf895-7da1-4c98-8c23-cd4e2c52430b" containerID="733909fe4916241000c1ecaecaba1ff6106db279d6c358452cc6a9f2e3418a92" exitCode=0
Jan 27 14:40:54 crc kubenswrapper[4914]: I0127 14:40:54.960687 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tktd7" event={"ID":"77aaf895-7da1-4c98-8c23-cd4e2c52430b","Type":"ContainerDied","Data":"733909fe4916241000c1ecaecaba1ff6106db279d6c358452cc6a9f2e3418a92"}
Jan 27 14:40:54 crc kubenswrapper[4914]: I0127 14:40:54.961109 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tktd7" event={"ID":"77aaf895-7da1-4c98-8c23-cd4e2c52430b","Type":"ContainerStarted","Data":"d6757607df4fe2d265b3380bd416c55f4b11a99f913a663445e37592acc51c91"}
Jan 27 14:40:54 crc kubenswrapper[4914]: I0127 14:40:54.964330 4914 generic.go:334] "Generic (PLEG): container finished" podID="b9bea1f1-821a-4fdb-af42-3e1759fad8d0" containerID="93b6de2084447e0306e1bf08c2d509486d0f739235ccc93519301e88dd000b52" exitCode=0
Jan 27 14:40:54 crc kubenswrapper[4914]: I0127 14:40:54.965413 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5rs2" event={"ID":"b9bea1f1-821a-4fdb-af42-3e1759fad8d0","Type":"ContainerDied","Data":"93b6de2084447e0306e1bf08c2d509486d0f739235ccc93519301e88dd000b52"}
Jan 27 14:40:54 crc kubenswrapper[4914]: I0127 14:40:54.965460 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5rs2" event={"ID":"b9bea1f1-821a-4fdb-af42-3e1759fad8d0","Type":"ContainerStarted","Data":"23956a775c8c1cc909cf9f96250c5778919a5e691aca8461903de518ce963ac2"}
Jan 27 14:40:55 crc kubenswrapper[4914]: I0127 14:40:55.979777 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5rs2" event={"ID":"b9bea1f1-821a-4fdb-af42-3e1759fad8d0","Type":"ContainerStarted","Data":"1329aaf0c28d5c9f5be0014d0d913a39819040851178a88d0ed3f65bdc090d30"}
Jan 27 14:40:56 crc kubenswrapper[4914]: I0127 14:40:56.991371 4914 generic.go:334] "Generic (PLEG): container finished" podID="77aaf895-7da1-4c98-8c23-cd4e2c52430b" containerID="f41472429e35faaec3ace4b1671273d58a1f35d93341c7155cd937643fa0bfb8" exitCode=0
Jan 27 14:40:56 crc kubenswrapper[4914]: I0127 14:40:56.992252 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tktd7" event={"ID":"77aaf895-7da1-4c98-8c23-cd4e2c52430b","Type":"ContainerDied","Data":"f41472429e35faaec3ace4b1671273d58a1f35d93341c7155cd937643fa0bfb8"}
Jan 27 14:40:57 crc kubenswrapper[4914]: E0127 14:40:57.035577 4914 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77aaf895_7da1_4c98_8c23_cd4e2c52430b.slice/crio-conmon-f41472429e35faaec3ace4b1671273d58a1f35d93341c7155cd937643fa0bfb8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77aaf895_7da1_4c98_8c23_cd4e2c52430b.slice/crio-f41472429e35faaec3ace4b1671273d58a1f35d93341c7155cd937643fa0bfb8.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 14:40:58 crc kubenswrapper[4914]: I0127 14:40:58.001602 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tktd7" event={"ID":"77aaf895-7da1-4c98-8c23-cd4e2c52430b","Type":"ContainerStarted","Data":"cbdb06feb1b515c8f9e5af5259bcd157889f2a3fb96bb2c05cb71f400e02faac"}
Jan 27 14:40:58 crc kubenswrapper[4914]: I0127 14:40:58.003951 4914 generic.go:334] "Generic (PLEG): container finished" podID="b9bea1f1-821a-4fdb-af42-3e1759fad8d0" containerID="1329aaf0c28d5c9f5be0014d0d913a39819040851178a88d0ed3f65bdc090d30" exitCode=0
Jan 27 14:40:58 crc kubenswrapper[4914]: I0127 14:40:58.003998 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5rs2" event={"ID":"b9bea1f1-821a-4fdb-af42-3e1759fad8d0","Type":"ContainerDied","Data":"1329aaf0c28d5c9f5be0014d0d913a39819040851178a88d0ed3f65bdc090d30"}
Jan 27 14:40:58 crc kubenswrapper[4914]: I0127 14:40:58.028392 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tktd7" podStartSLOduration=2.325786861 podStartE2EDuration="5.028371274s" podCreationTimestamp="2026-01-27 14:40:53 +0000 UTC" firstStartedPulling="2026-01-27 14:40:54.962936448 +0000 UTC m=+3413.275286553" lastFinishedPulling="2026-01-27 14:40:57.665520861 +0000 UTC m=+3415.977870966" observedRunningTime="2026-01-27 14:40:58.020211131 +0000 UTC m=+3416.332561206" watchObservedRunningTime="2026-01-27 14:40:58.028371274 +0000 UTC m=+3416.340721349"
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.014710 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5rs2" event={"ID":"b9bea1f1-821a-4fdb-af42-3e1759fad8d0","Type":"ContainerStarted","Data":"6a3f3cdad336027c0aa78cb79399d10c91d10bff633f6341b3d65ae9e4dde61a"}
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.040568 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t5rs2" podStartSLOduration=2.593557199 podStartE2EDuration="6.040547681s" podCreationTimestamp="2026-01-27 14:40:53 +0000 UTC" firstStartedPulling="2026-01-27 14:40:54.971795539 +0000 UTC m=+3413.284145624" lastFinishedPulling="2026-01-27 14:40:58.418786021 +0000 UTC m=+3416.731136106" observedRunningTime="2026-01-27 14:40:59.035583215 +0000 UTC m=+3417.347933320" watchObservedRunningTime="2026-01-27 14:40:59.040547681 +0000 UTC m=+3417.352897766"
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.259996 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9fszh"]
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.262746 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.284514 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9fszh"]
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.372781 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8sj8\" (UniqueName: \"kubernetes.io/projected/e0f63fcd-3cec-4940-b5e0-8be812e8d86c-kube-api-access-c8sj8\") pod \"certified-operators-9fszh\" (UID: \"e0f63fcd-3cec-4940-b5e0-8be812e8d86c\") " pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.373014 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0f63fcd-3cec-4940-b5e0-8be812e8d86c-utilities\") pod \"certified-operators-9fszh\" (UID: \"e0f63fcd-3cec-4940-b5e0-8be812e8d86c\") " pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.373090 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0f63fcd-3cec-4940-b5e0-8be812e8d86c-catalog-content\") pod \"certified-operators-9fszh\" (UID: \"e0f63fcd-3cec-4940-b5e0-8be812e8d86c\") " pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.474541 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0f63fcd-3cec-4940-b5e0-8be812e8d86c-utilities\") pod \"certified-operators-9fszh\" (UID: \"e0f63fcd-3cec-4940-b5e0-8be812e8d86c\") " pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.474604 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0f63fcd-3cec-4940-b5e0-8be812e8d86c-catalog-content\") pod \"certified-operators-9fszh\" (UID: \"e0f63fcd-3cec-4940-b5e0-8be812e8d86c\") " pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.474669 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8sj8\" (UniqueName: \"kubernetes.io/projected/e0f63fcd-3cec-4940-b5e0-8be812e8d86c-kube-api-access-c8sj8\") pod \"certified-operators-9fszh\" (UID: \"e0f63fcd-3cec-4940-b5e0-8be812e8d86c\") " pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.475481 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0f63fcd-3cec-4940-b5e0-8be812e8d86c-utilities\") pod \"certified-operators-9fszh\" (UID: \"e0f63fcd-3cec-4940-b5e0-8be812e8d86c\") " pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.475694 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0f63fcd-3cec-4940-b5e0-8be812e8d86c-catalog-content\") pod \"certified-operators-9fszh\" (UID: \"e0f63fcd-3cec-4940-b5e0-8be812e8d86c\") " pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.507663 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8sj8\" (UniqueName: \"kubernetes.io/projected/e0f63fcd-3cec-4940-b5e0-8be812e8d86c-kube-api-access-c8sj8\") pod \"certified-operators-9fszh\" (UID: \"e0f63fcd-3cec-4940-b5e0-8be812e8d86c\") " pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:40:59 crc kubenswrapper[4914]: I0127 14:40:59.579887 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:41:00 crc kubenswrapper[4914]: I0127 14:41:00.083166 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9fszh"]
Jan 27 14:41:01 crc kubenswrapper[4914]: I0127 14:41:01.039188 4914 generic.go:334] "Generic (PLEG): container finished" podID="e0f63fcd-3cec-4940-b5e0-8be812e8d86c" containerID="ea7bd4d4ef7a92278e23942c89821712107fe7b6e41822ba1771249e85a6aeb9" exitCode=0
Jan 27 14:41:01 crc kubenswrapper[4914]: I0127 14:41:01.039564 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fszh" event={"ID":"e0f63fcd-3cec-4940-b5e0-8be812e8d86c","Type":"ContainerDied","Data":"ea7bd4d4ef7a92278e23942c89821712107fe7b6e41822ba1771249e85a6aeb9"}
Jan 27 14:41:01 crc kubenswrapper[4914]: I0127 14:41:01.039606 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fszh" event={"ID":"e0f63fcd-3cec-4940-b5e0-8be812e8d86c","Type":"ContainerStarted","Data":"be9333778c7817faf07616e03e4645e3fb5bb7ed2e36fa4fe048e70baf7e86ea"}
Jan 27 14:41:03 crc kubenswrapper[4914]: I0127 14:41:03.385129 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:41:03 crc kubenswrapper[4914]: I0127 14:41:03.385619 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:41:03 crc kubenswrapper[4914]: I0127 14:41:03.429886 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:41:03 crc kubenswrapper[4914]: I0127 14:41:03.621166 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:41:03 crc kubenswrapper[4914]: I0127 14:41:03.621291 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:41:03 crc kubenswrapper[4914]: I0127 14:41:03.675728 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:41:04 crc kubenswrapper[4914]: I0127 14:41:04.124673 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:41:04 crc kubenswrapper[4914]: I0127 14:41:04.140062 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:41:06 crc kubenswrapper[4914]: I0127 14:41:06.095058 4914 generic.go:334] "Generic (PLEG): container finished" podID="e0f63fcd-3cec-4940-b5e0-8be812e8d86c" containerID="3fd8ea548ef384b92a8dd9eb6c5e9f267c41d911d6b05e67655ada60c68e8a72" exitCode=0
Jan 27 14:41:06 crc kubenswrapper[4914]: I0127 14:41:06.095286 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fszh" event={"ID":"e0f63fcd-3cec-4940-b5e0-8be812e8d86c","Type":"ContainerDied","Data":"3fd8ea548ef384b92a8dd9eb6c5e9f267c41d911d6b05e67655ada60c68e8a72"}
Jan 27 14:41:06 crc kubenswrapper[4914]: I0127 14:41:06.844702 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t5rs2"]
Jan 27 14:41:06 crc kubenswrapper[4914]: I0127 14:41:06.845226 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t5rs2" podUID="b9bea1f1-821a-4fdb-af42-3e1759fad8d0" containerName="registry-server" containerID="cri-o://6a3f3cdad336027c0aa78cb79399d10c91d10bff633f6341b3d65ae9e4dde61a" gracePeriod=2
Jan 27 14:41:07 crc kubenswrapper[4914]: I0127 14:41:07.108848 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fszh" event={"ID":"e0f63fcd-3cec-4940-b5e0-8be812e8d86c","Type":"ContainerStarted","Data":"f212ca4fbc51e822c39037b66477c565f5ac149b5b5ce9fc2d67216b22794b9c"}
Jan 27 14:41:07 crc kubenswrapper[4914]: I0127 14:41:07.135914 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9fszh" podStartSLOduration=2.625802721 podStartE2EDuration="8.135895361s" podCreationTimestamp="2026-01-27 14:40:59 +0000 UTC" firstStartedPulling="2026-01-27 14:41:01.041774671 +0000 UTC m=+3419.354124756" lastFinishedPulling="2026-01-27 14:41:06.551867311 +0000 UTC m=+3424.864217396" observedRunningTime="2026-01-27 14:41:07.130896245 +0000 UTC m=+3425.443246330" watchObservedRunningTime="2026-01-27 14:41:07.135895361 +0000 UTC m=+3425.448245446"
Jan 27 14:41:07 crc kubenswrapper[4914]: I0127 14:41:07.641341 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tktd7"]
Jan 27 14:41:07 crc kubenswrapper[4914]: I0127 14:41:07.642130 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tktd7" podUID="77aaf895-7da1-4c98-8c23-cd4e2c52430b" containerName="registry-server" containerID="cri-o://cbdb06feb1b515c8f9e5af5259bcd157889f2a3fb96bb2c05cb71f400e02faac" gracePeriod=2
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.125336 4914 generic.go:334] "Generic (PLEG): container finished" podID="77aaf895-7da1-4c98-8c23-cd4e2c52430b" containerID="cbdb06feb1b515c8f9e5af5259bcd157889f2a3fb96bb2c05cb71f400e02faac" exitCode=0
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.125416 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tktd7" event={"ID":"77aaf895-7da1-4c98-8c23-cd4e2c52430b","Type":"ContainerDied","Data":"cbdb06feb1b515c8f9e5af5259bcd157889f2a3fb96bb2c05cb71f400e02faac"}
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.128846 4914 generic.go:334] "Generic (PLEG): container finished" podID="b9bea1f1-821a-4fdb-af42-3e1759fad8d0" containerID="6a3f3cdad336027c0aa78cb79399d10c91d10bff633f6341b3d65ae9e4dde61a" exitCode=0
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.128866 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5rs2" event={"ID":"b9bea1f1-821a-4fdb-af42-3e1759fad8d0","Type":"ContainerDied","Data":"6a3f3cdad336027c0aa78cb79399d10c91d10bff633f6341b3d65ae9e4dde61a"}
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.746191 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.875106 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-utilities\") pod \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\" (UID: \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\") "
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.875185 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf7s8\" (UniqueName: \"kubernetes.io/projected/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-kube-api-access-wf7s8\") pod \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\" (UID: \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\") "
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.875367 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-catalog-content\") pod \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\" (UID: \"b9bea1f1-821a-4fdb-af42-3e1759fad8d0\") "
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.879201 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-utilities" (OuterVolumeSpecName: "utilities") pod "b9bea1f1-821a-4fdb-af42-3e1759fad8d0" (UID: "b9bea1f1-821a-4fdb-af42-3e1759fad8d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.883297 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-kube-api-access-wf7s8" (OuterVolumeSpecName: "kube-api-access-wf7s8") pod "b9bea1f1-821a-4fdb-af42-3e1759fad8d0" (UID: "b9bea1f1-821a-4fdb-af42-3e1759fad8d0"). InnerVolumeSpecName "kube-api-access-wf7s8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.921752 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.929328 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9bea1f1-821a-4fdb-af42-3e1759fad8d0" (UID: "b9bea1f1-821a-4fdb-af42-3e1759fad8d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.977350 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77aaf895-7da1-4c98-8c23-cd4e2c52430b-catalog-content\") pod \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\" (UID: \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\") "
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.977408 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxv8n\" (UniqueName: \"kubernetes.io/projected/77aaf895-7da1-4c98-8c23-cd4e2c52430b-kube-api-access-vxv8n\") pod \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\" (UID: \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\") "
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.977657 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77aaf895-7da1-4c98-8c23-cd4e2c52430b-utilities\") pod \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\" (UID: \"77aaf895-7da1-4c98-8c23-cd4e2c52430b\") "
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.978253 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.978279 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.978293 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf7s8\" (UniqueName: \"kubernetes.io/projected/b9bea1f1-821a-4fdb-af42-3e1759fad8d0-kube-api-access-wf7s8\") on node \"crc\" DevicePath \"\""
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.978548 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77aaf895-7da1-4c98-8c23-cd4e2c52430b-utilities" (OuterVolumeSpecName: "utilities") pod "77aaf895-7da1-4c98-8c23-cd4e2c52430b" (UID: "77aaf895-7da1-4c98-8c23-cd4e2c52430b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.981240 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77aaf895-7da1-4c98-8c23-cd4e2c52430b-kube-api-access-vxv8n" (OuterVolumeSpecName: "kube-api-access-vxv8n") pod "77aaf895-7da1-4c98-8c23-cd4e2c52430b" (UID: "77aaf895-7da1-4c98-8c23-cd4e2c52430b"). InnerVolumeSpecName "kube-api-access-vxv8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:41:08 crc kubenswrapper[4914]: I0127 14:41:08.999674 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77aaf895-7da1-4c98-8c23-cd4e2c52430b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77aaf895-7da1-4c98-8c23-cd4e2c52430b" (UID: "77aaf895-7da1-4c98-8c23-cd4e2c52430b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.079472 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77aaf895-7da1-4c98-8c23-cd4e2c52430b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.079502 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxv8n\" (UniqueName: \"kubernetes.io/projected/77aaf895-7da1-4c98-8c23-cd4e2c52430b-kube-api-access-vxv8n\") on node \"crc\" DevicePath \"\""
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.079513 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77aaf895-7da1-4c98-8c23-cd4e2c52430b-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.142854 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tktd7" event={"ID":"77aaf895-7da1-4c98-8c23-cd4e2c52430b","Type":"ContainerDied","Data":"d6757607df4fe2d265b3380bd416c55f4b11a99f913a663445e37592acc51c91"}
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.142946 4914 scope.go:117] "RemoveContainer" containerID="cbdb06feb1b515c8f9e5af5259bcd157889f2a3fb96bb2c05cb71f400e02faac"
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.144635 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tktd7"
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.147091 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5rs2" event={"ID":"b9bea1f1-821a-4fdb-af42-3e1759fad8d0","Type":"ContainerDied","Data":"23956a775c8c1cc909cf9f96250c5778919a5e691aca8461903de518ce963ac2"}
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.147189 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5rs2"
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.164996 4914 scope.go:117] "RemoveContainer" containerID="f41472429e35faaec3ace4b1671273d58a1f35d93341c7155cd937643fa0bfb8"
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.198262 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tktd7"]
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.204373 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tktd7"]
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.214699 4914 scope.go:117] "RemoveContainer" containerID="733909fe4916241000c1ecaecaba1ff6106db279d6c358452cc6a9f2e3418a92"
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.214820 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t5rs2"]
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.224907 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t5rs2"]
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.270644 4914 scope.go:117] "RemoveContainer" containerID="6a3f3cdad336027c0aa78cb79399d10c91d10bff633f6341b3d65ae9e4dde61a"
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.315039 4914 scope.go:117] "RemoveContainer" containerID="1329aaf0c28d5c9f5be0014d0d913a39819040851178a88d0ed3f65bdc090d30"
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.335288 4914 scope.go:117] "RemoveContainer" containerID="93b6de2084447e0306e1bf08c2d509486d0f739235ccc93519301e88dd000b52"
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.580570 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.581143 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:41:09 crc kubenswrapper[4914]: I0127 14:41:09.633261 4914 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:41:10 crc kubenswrapper[4914]: I0127 14:41:10.304973 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77aaf895-7da1-4c98-8c23-cd4e2c52430b" path="/var/lib/kubelet/pods/77aaf895-7da1-4c98-8c23-cd4e2c52430b/volumes"
Jan 27 14:41:10 crc kubenswrapper[4914]: I0127 14:41:10.306060 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9bea1f1-821a-4fdb-af42-3e1759fad8d0" path="/var/lib/kubelet/pods/b9bea1f1-821a-4fdb-af42-3e1759fad8d0/volumes"
Jan 27 14:41:19 crc kubenswrapper[4914]: I0127 14:41:19.628935 4914 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9fszh"
Jan 27 14:41:19 crc kubenswrapper[4914]: I0127 14:41:19.698480 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9fszh"]
Jan 27 14:41:19 crc kubenswrapper[4914]: I0127 14:41:19.747567 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tjprh"]
Jan 27 14:41:19 crc kubenswrapper[4914]: I0127 14:41:19.747976 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tjprh" podUID="26b425ae-cbd3-4e25-becc-0a4c638599b2" containerName="registry-server" containerID="cri-o://7bff98b604a5a61cee0639a13aec272e23607b7ba10e15b911df26498e45928c" gracePeriod=2
Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.188188 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.257740 4914 generic.go:334] "Generic (PLEG): container finished" podID="26b425ae-cbd3-4e25-becc-0a4c638599b2" containerID="7bff98b604a5a61cee0639a13aec272e23607b7ba10e15b911df26498e45928c" exitCode=0
Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.257962 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjprh" event={"ID":"26b425ae-cbd3-4e25-becc-0a4c638599b2","Type":"ContainerDied","Data":"7bff98b604a5a61cee0639a13aec272e23607b7ba10e15b911df26498e45928c"}
Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.258016 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjprh" event={"ID":"26b425ae-cbd3-4e25-becc-0a4c638599b2","Type":"ContainerDied","Data":"55aa0a15321b1c1ed0d6c47317d0389f05bc42d004c38a9db8b507a915f97d9d"}
Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.258038 4914 scope.go:117] "RemoveContainer" containerID="7bff98b604a5a61cee0639a13aec272e23607b7ba10e15b911df26498e45928c"
Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.258185 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjprh"
Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.292976 4914 scope.go:117] "RemoveContainer" containerID="1863fd41355c2888682e458871f2d3f04282770b10ffabf1a7f1bfa5ee099087"
Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.321718 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b425ae-cbd3-4e25-becc-0a4c638599b2-catalog-content\") pod \"26b425ae-cbd3-4e25-becc-0a4c638599b2\" (UID: \"26b425ae-cbd3-4e25-becc-0a4c638599b2\") "
Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.321784 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b425ae-cbd3-4e25-becc-0a4c638599b2-utilities\") pod \"26b425ae-cbd3-4e25-becc-0a4c638599b2\" (UID: \"26b425ae-cbd3-4e25-becc-0a4c638599b2\") "
Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.321986 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4plm\" (UniqueName: \"kubernetes.io/projected/26b425ae-cbd3-4e25-becc-0a4c638599b2-kube-api-access-s4plm\") pod \"26b425ae-cbd3-4e25-becc-0a4c638599b2\" (UID: \"26b425ae-cbd3-4e25-becc-0a4c638599b2\") "
Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.323799 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26b425ae-cbd3-4e25-becc-0a4c638599b2-utilities" (OuterVolumeSpecName: "utilities") pod "26b425ae-cbd3-4e25-becc-0a4c638599b2" (UID: "26b425ae-cbd3-4e25-becc-0a4c638599b2"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.332080 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b425ae-cbd3-4e25-becc-0a4c638599b2-kube-api-access-s4plm" (OuterVolumeSpecName: "kube-api-access-s4plm") pod "26b425ae-cbd3-4e25-becc-0a4c638599b2" (UID: "26b425ae-cbd3-4e25-becc-0a4c638599b2"). InnerVolumeSpecName "kube-api-access-s4plm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.339291 4914 scope.go:117] "RemoveContainer" containerID="34910911cb364e40b5e668a554b804d78e613644fa00c9b6b6665f801447bc67" Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.398976 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26b425ae-cbd3-4e25-becc-0a4c638599b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26b425ae-cbd3-4e25-becc-0a4c638599b2" (UID: "26b425ae-cbd3-4e25-becc-0a4c638599b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.424676 4914 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b425ae-cbd3-4e25-becc-0a4c638599b2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.424711 4914 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b425ae-cbd3-4e25-becc-0a4c638599b2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.424720 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4plm\" (UniqueName: \"kubernetes.io/projected/26b425ae-cbd3-4e25-becc-0a4c638599b2-kube-api-access-s4plm\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.455720 4914 scope.go:117] "RemoveContainer" containerID="7bff98b604a5a61cee0639a13aec272e23607b7ba10e15b911df26498e45928c" Jan 27 14:41:20 crc kubenswrapper[4914]: E0127 14:41:20.456492 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bff98b604a5a61cee0639a13aec272e23607b7ba10e15b911df26498e45928c\": container with ID starting with 7bff98b604a5a61cee0639a13aec272e23607b7ba10e15b911df26498e45928c not found: ID does not exist" containerID="7bff98b604a5a61cee0639a13aec272e23607b7ba10e15b911df26498e45928c" Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.456523 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bff98b604a5a61cee0639a13aec272e23607b7ba10e15b911df26498e45928c"} err="failed to get container status \"7bff98b604a5a61cee0639a13aec272e23607b7ba10e15b911df26498e45928c\": rpc error: code = NotFound desc = could not find container \"7bff98b604a5a61cee0639a13aec272e23607b7ba10e15b911df26498e45928c\": container with ID 
starting with 7bff98b604a5a61cee0639a13aec272e23607b7ba10e15b911df26498e45928c not found: ID does not exist" Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.456544 4914 scope.go:117] "RemoveContainer" containerID="1863fd41355c2888682e458871f2d3f04282770b10ffabf1a7f1bfa5ee099087" Jan 27 14:41:20 crc kubenswrapper[4914]: E0127 14:41:20.456952 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1863fd41355c2888682e458871f2d3f04282770b10ffabf1a7f1bfa5ee099087\": container with ID starting with 1863fd41355c2888682e458871f2d3f04282770b10ffabf1a7f1bfa5ee099087 not found: ID does not exist" containerID="1863fd41355c2888682e458871f2d3f04282770b10ffabf1a7f1bfa5ee099087" Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.456972 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1863fd41355c2888682e458871f2d3f04282770b10ffabf1a7f1bfa5ee099087"} err="failed to get container status \"1863fd41355c2888682e458871f2d3f04282770b10ffabf1a7f1bfa5ee099087\": rpc error: code = NotFound desc = could not find container \"1863fd41355c2888682e458871f2d3f04282770b10ffabf1a7f1bfa5ee099087\": container with ID starting with 1863fd41355c2888682e458871f2d3f04282770b10ffabf1a7f1bfa5ee099087 not found: ID does not exist" Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.456986 4914 scope.go:117] "RemoveContainer" containerID="34910911cb364e40b5e668a554b804d78e613644fa00c9b6b6665f801447bc67" Jan 27 14:41:20 crc kubenswrapper[4914]: E0127 14:41:20.457538 4914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34910911cb364e40b5e668a554b804d78e613644fa00c9b6b6665f801447bc67\": container with ID starting with 34910911cb364e40b5e668a554b804d78e613644fa00c9b6b6665f801447bc67 not found: ID does not exist" containerID="34910911cb364e40b5e668a554b804d78e613644fa00c9b6b6665f801447bc67" Jan 27 
14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.457560 4914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34910911cb364e40b5e668a554b804d78e613644fa00c9b6b6665f801447bc67"} err="failed to get container status \"34910911cb364e40b5e668a554b804d78e613644fa00c9b6b6665f801447bc67\": rpc error: code = NotFound desc = could not find container \"34910911cb364e40b5e668a554b804d78e613644fa00c9b6b6665f801447bc67\": container with ID starting with 34910911cb364e40b5e668a554b804d78e613644fa00c9b6b6665f801447bc67 not found: ID does not exist" Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.624041 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tjprh"] Jan 27 14:41:20 crc kubenswrapper[4914]: I0127 14:41:20.633337 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tjprh"] Jan 27 14:41:22 crc kubenswrapper[4914]: I0127 14:41:22.304963 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b425ae-cbd3-4e25-becc-0a4c638599b2" path="/var/lib/kubelet/pods/26b425ae-cbd3-4e25-becc-0a4c638599b2/volumes" Jan 27 14:41:31 crc kubenswrapper[4914]: I0127 14:41:31.356213 4914 generic.go:334] "Generic (PLEG): container finished" podID="d5cc861b-5221-436b-8fb2-82b729fd4334" containerID="b391631083b93c23aa391c5697a41b0d5ceabf4d65a48510346e65490203239a" exitCode=0 Jan 27 14:41:31 crc kubenswrapper[4914]: I0127 14:41:31.356583 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p5z5z/must-gather-mqs9g" event={"ID":"d5cc861b-5221-436b-8fb2-82b729fd4334","Type":"ContainerDied","Data":"b391631083b93c23aa391c5697a41b0d5ceabf4d65a48510346e65490203239a"} Jan 27 14:41:31 crc kubenswrapper[4914]: I0127 14:41:31.357156 4914 scope.go:117] "RemoveContainer" containerID="b391631083b93c23aa391c5697a41b0d5ceabf4d65a48510346e65490203239a" Jan 27 14:41:32 crc kubenswrapper[4914]: I0127 14:41:32.405099 
4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p5z5z_must-gather-mqs9g_d5cc861b-5221-436b-8fb2-82b729fd4334/gather/0.log" Jan 27 14:41:37 crc kubenswrapper[4914]: I0127 14:41:37.691364 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:41:37 crc kubenswrapper[4914]: I0127 14:41:37.691851 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.072272 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p5z5z/must-gather-mqs9g"] Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.072950 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-p5z5z/must-gather-mqs9g" podUID="d5cc861b-5221-436b-8fb2-82b729fd4334" containerName="copy" containerID="cri-o://a67d191197532e1612b9dc961b64c8029760ac385ae61f3b522cd230b585648b" gracePeriod=2 Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.127990 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p5z5z/must-gather-mqs9g"] Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.458318 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p5z5z_must-gather-mqs9g_d5cc861b-5221-436b-8fb2-82b729fd4334/copy/0.log" Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.459824 4914 generic.go:334] "Generic (PLEG): container finished" podID="d5cc861b-5221-436b-8fb2-82b729fd4334" 
containerID="a67d191197532e1612b9dc961b64c8029760ac385ae61f3b522cd230b585648b" exitCode=143 Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.459923 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b75a0c826a8b664b68bf3fcdc735682dc75395d88de7bec3cf6af871b8b78a73" Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.525587 4914 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p5z5z_must-gather-mqs9g_d5cc861b-5221-436b-8fb2-82b729fd4334/copy/0.log" Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.525967 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p5z5z/must-gather-mqs9g" Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.631676 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5cc861b-5221-436b-8fb2-82b729fd4334-must-gather-output\") pod \"d5cc861b-5221-436b-8fb2-82b729fd4334\" (UID: \"d5cc861b-5221-436b-8fb2-82b729fd4334\") " Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.631798 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl2pc\" (UniqueName: \"kubernetes.io/projected/d5cc861b-5221-436b-8fb2-82b729fd4334-kube-api-access-tl2pc\") pod \"d5cc861b-5221-436b-8fb2-82b729fd4334\" (UID: \"d5cc861b-5221-436b-8fb2-82b729fd4334\") " Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.640743 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5cc861b-5221-436b-8fb2-82b729fd4334-kube-api-access-tl2pc" (OuterVolumeSpecName: "kube-api-access-tl2pc") pod "d5cc861b-5221-436b-8fb2-82b729fd4334" (UID: "d5cc861b-5221-436b-8fb2-82b729fd4334"). InnerVolumeSpecName "kube-api-access-tl2pc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.735175 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl2pc\" (UniqueName: \"kubernetes.io/projected/d5cc861b-5221-436b-8fb2-82b729fd4334-kube-api-access-tl2pc\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.809499 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5cc861b-5221-436b-8fb2-82b729fd4334-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d5cc861b-5221-436b-8fb2-82b729fd4334" (UID: "d5cc861b-5221-436b-8fb2-82b729fd4334"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:41:40 crc kubenswrapper[4914]: I0127 14:41:40.836582 4914 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5cc861b-5221-436b-8fb2-82b729fd4334-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:41 crc kubenswrapper[4914]: I0127 14:41:41.466816 4914 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p5z5z/must-gather-mqs9g" Jan 27 14:41:42 crc kubenswrapper[4914]: I0127 14:41:42.304400 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5cc861b-5221-436b-8fb2-82b729fd4334" path="/var/lib/kubelet/pods/d5cc861b-5221-436b-8fb2-82b729fd4334/volumes" Jan 27 14:42:07 crc kubenswrapper[4914]: I0127 14:42:07.691403 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:42:07 crc kubenswrapper[4914]: I0127 14:42:07.692230 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:42:16 crc kubenswrapper[4914]: I0127 14:42:16.805708 4914 scope.go:117] "RemoveContainer" containerID="a67d191197532e1612b9dc961b64c8029760ac385ae61f3b522cd230b585648b" Jan 27 14:42:16 crc kubenswrapper[4914]: I0127 14:42:16.837085 4914 scope.go:117] "RemoveContainer" containerID="b391631083b93c23aa391c5697a41b0d5ceabf4d65a48510346e65490203239a" Jan 27 14:42:37 crc kubenswrapper[4914]: I0127 14:42:37.691454 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:42:37 crc kubenswrapper[4914]: I0127 14:42:37.692077 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" 
podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:42:37 crc kubenswrapper[4914]: I0127 14:42:37.692136 4914 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" Jan 27 14:42:37 crc kubenswrapper[4914]: I0127 14:42:37.692950 4914 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40837c9b1c5142e5ba99883dc8307f8fc3c06f7346b9663b379e8a06df6b926f"} pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:42:37 crc kubenswrapper[4914]: I0127 14:42:37.693008 4914 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" containerID="cri-o://40837c9b1c5142e5ba99883dc8307f8fc3c06f7346b9663b379e8a06df6b926f" gracePeriod=600 Jan 27 14:42:37 crc kubenswrapper[4914]: I0127 14:42:37.957076 4914 generic.go:334] "Generic (PLEG): container finished" podID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerID="40837c9b1c5142e5ba99883dc8307f8fc3c06f7346b9663b379e8a06df6b926f" exitCode=0 Jan 27 14:42:37 crc kubenswrapper[4914]: I0127 14:42:37.957120 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerDied","Data":"40837c9b1c5142e5ba99883dc8307f8fc3c06f7346b9663b379e8a06df6b926f"} Jan 27 14:42:37 crc kubenswrapper[4914]: I0127 14:42:37.957756 4914 scope.go:117] "RemoveContainer" 
containerID="db5422daa199c7cf09b129d276f897990a9abe9b30683796d01f534589972b12" Jan 27 14:42:38 crc kubenswrapper[4914]: I0127 14:42:38.971655 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" event={"ID":"bdf2dcff-9caa-45ba-98a8-0a00861bd11a","Type":"ContainerStarted","Data":"dd4250f690be036ee6eac0ab7cc333221bcb01bb4c06d7ed8f8b4a6947917346"} Jan 27 14:43:16 crc kubenswrapper[4914]: I0127 14:43:16.959912 4914 scope.go:117] "RemoveContainer" containerID="d3ef800887fd60cb8c2bfe5eeb5f7669a4380ead15bc62bd7b89a682269267fb" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.152908 4914 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v"] Jan 27 14:45:00 crc kubenswrapper[4914]: E0127 14:45:00.153688 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cc861b-5221-436b-8fb2-82b729fd4334" containerName="copy" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.153706 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cc861b-5221-436b-8fb2-82b729fd4334" containerName="copy" Jan 27 14:45:00 crc kubenswrapper[4914]: E0127 14:45:00.153730 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b425ae-cbd3-4e25-becc-0a4c638599b2" containerName="extract-content" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.153738 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b425ae-cbd3-4e25-becc-0a4c638599b2" containerName="extract-content" Jan 27 14:45:00 crc kubenswrapper[4914]: E0127 14:45:00.153754 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77aaf895-7da1-4c98-8c23-cd4e2c52430b" containerName="extract-utilities" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.153763 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="77aaf895-7da1-4c98-8c23-cd4e2c52430b" containerName="extract-utilities" Jan 27 14:45:00 crc 
kubenswrapper[4914]: E0127 14:45:00.153778 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77aaf895-7da1-4c98-8c23-cd4e2c52430b" containerName="extract-content" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.153786 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="77aaf895-7da1-4c98-8c23-cd4e2c52430b" containerName="extract-content" Jan 27 14:45:00 crc kubenswrapper[4914]: E0127 14:45:00.153801 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bea1f1-821a-4fdb-af42-3e1759fad8d0" containerName="registry-server" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.153809 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bea1f1-821a-4fdb-af42-3e1759fad8d0" containerName="registry-server" Jan 27 14:45:00 crc kubenswrapper[4914]: E0127 14:45:00.153824 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bea1f1-821a-4fdb-af42-3e1759fad8d0" containerName="extract-utilities" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.153850 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bea1f1-821a-4fdb-af42-3e1759fad8d0" containerName="extract-utilities" Jan 27 14:45:00 crc kubenswrapper[4914]: E0127 14:45:00.153864 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b425ae-cbd3-4e25-becc-0a4c638599b2" containerName="registry-server" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.153873 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b425ae-cbd3-4e25-becc-0a4c638599b2" containerName="registry-server" Jan 27 14:45:00 crc kubenswrapper[4914]: E0127 14:45:00.153888 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bea1f1-821a-4fdb-af42-3e1759fad8d0" containerName="extract-content" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.153896 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bea1f1-821a-4fdb-af42-3e1759fad8d0" containerName="extract-content" Jan 27 14:45:00 crc 
kubenswrapper[4914]: E0127 14:45:00.153911 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77aaf895-7da1-4c98-8c23-cd4e2c52430b" containerName="registry-server" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.153919 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="77aaf895-7da1-4c98-8c23-cd4e2c52430b" containerName="registry-server" Jan 27 14:45:00 crc kubenswrapper[4914]: E0127 14:45:00.153932 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cc861b-5221-436b-8fb2-82b729fd4334" containerName="gather" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.153939 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cc861b-5221-436b-8fb2-82b729fd4334" containerName="gather" Jan 27 14:45:00 crc kubenswrapper[4914]: E0127 14:45:00.153959 4914 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b425ae-cbd3-4e25-becc-0a4c638599b2" containerName="extract-utilities" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.153967 4914 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b425ae-cbd3-4e25-becc-0a4c638599b2" containerName="extract-utilities" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.154162 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9bea1f1-821a-4fdb-af42-3e1759fad8d0" containerName="registry-server" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.154186 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cc861b-5221-436b-8fb2-82b729fd4334" containerName="copy" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.154207 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="77aaf895-7da1-4c98-8c23-cd4e2c52430b" containerName="registry-server" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.154218 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cc861b-5221-436b-8fb2-82b729fd4334" containerName="gather" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 
14:45:00.154231 4914 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b425ae-cbd3-4e25-becc-0a4c638599b2" containerName="registry-server" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.155073 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.157279 4914 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.160680 4914 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.179897 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v"] Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.294467 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a9df081-a1a4-4c68-85ce-32cd01b7004d-config-volume\") pod \"collect-profiles-29492085-9925v\" (UID: \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.294583 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a9df081-a1a4-4c68-85ce-32cd01b7004d-secret-volume\") pod \"collect-profiles-29492085-9925v\" (UID: \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.294658 4914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-z4mx4\" (UniqueName: \"kubernetes.io/projected/1a9df081-a1a4-4c68-85ce-32cd01b7004d-kube-api-access-z4mx4\") pod \"collect-profiles-29492085-9925v\" (UID: \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.396637 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a9df081-a1a4-4c68-85ce-32cd01b7004d-secret-volume\") pod \"collect-profiles-29492085-9925v\" (UID: \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.396743 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4mx4\" (UniqueName: \"kubernetes.io/projected/1a9df081-a1a4-4c68-85ce-32cd01b7004d-kube-api-access-z4mx4\") pod \"collect-profiles-29492085-9925v\" (UID: \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.396872 4914 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a9df081-a1a4-4c68-85ce-32cd01b7004d-config-volume\") pod \"collect-profiles-29492085-9925v\" (UID: \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v" Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.397822 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a9df081-a1a4-4c68-85ce-32cd01b7004d-config-volume\") pod \"collect-profiles-29492085-9925v\" (UID: \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v" Jan 27 14:45:00 
crc kubenswrapper[4914]: I0127 14:45:00.403663 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a9df081-a1a4-4c68-85ce-32cd01b7004d-secret-volume\") pod \"collect-profiles-29492085-9925v\" (UID: \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v"
Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.414691 4914 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4mx4\" (UniqueName: \"kubernetes.io/projected/1a9df081-a1a4-4c68-85ce-32cd01b7004d-kube-api-access-z4mx4\") pod \"collect-profiles-29492085-9925v\" (UID: \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v"
Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.488780 4914 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v"
Jan 27 14:45:00 crc kubenswrapper[4914]: I0127 14:45:00.971367 4914 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v"]
Jan 27 14:45:01 crc kubenswrapper[4914]: I0127 14:45:01.344543 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v" event={"ID":"1a9df081-a1a4-4c68-85ce-32cd01b7004d","Type":"ContainerStarted","Data":"4f437e7452dd335e574ca42151f7846ef09f0d036810c7a29734bdf8079ce39e"}
Jan 27 14:45:01 crc kubenswrapper[4914]: I0127 14:45:01.344598 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v" event={"ID":"1a9df081-a1a4-4c68-85ce-32cd01b7004d","Type":"ContainerStarted","Data":"9f0d572c2dbc402c4e3b75e46cb411985b224f645b28dac611fe284277317938"}
Jan 27 14:45:01 crc kubenswrapper[4914]: I0127 14:45:01.368073 4914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v" podStartSLOduration=1.368049689 podStartE2EDuration="1.368049689s" podCreationTimestamp="2026-01-27 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:45:01.359861756 +0000 UTC m=+3659.672211851" watchObservedRunningTime="2026-01-27 14:45:01.368049689 +0000 UTC m=+3659.680399774"
Jan 27 14:45:02 crc kubenswrapper[4914]: I0127 14:45:02.354200 4914 generic.go:334] "Generic (PLEG): container finished" podID="1a9df081-a1a4-4c68-85ce-32cd01b7004d" containerID="4f437e7452dd335e574ca42151f7846ef09f0d036810c7a29734bdf8079ce39e" exitCode=0
Jan 27 14:45:02 crc kubenswrapper[4914]: I0127 14:45:02.354368 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v" event={"ID":"1a9df081-a1a4-4c68-85ce-32cd01b7004d","Type":"ContainerDied","Data":"4f437e7452dd335e574ca42151f7846ef09f0d036810c7a29734bdf8079ce39e"}
Jan 27 14:45:03 crc kubenswrapper[4914]: I0127 14:45:03.678241 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v"
Jan 27 14:45:03 crc kubenswrapper[4914]: I0127 14:45:03.861043 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4mx4\" (UniqueName: \"kubernetes.io/projected/1a9df081-a1a4-4c68-85ce-32cd01b7004d-kube-api-access-z4mx4\") pod \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\" (UID: \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\") "
Jan 27 14:45:03 crc kubenswrapper[4914]: I0127 14:45:03.861158 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a9df081-a1a4-4c68-85ce-32cd01b7004d-config-volume\") pod \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\" (UID: \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\") "
Jan 27 14:45:03 crc kubenswrapper[4914]: I0127 14:45:03.861231 4914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a9df081-a1a4-4c68-85ce-32cd01b7004d-secret-volume\") pod \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\" (UID: \"1a9df081-a1a4-4c68-85ce-32cd01b7004d\") "
Jan 27 14:45:03 crc kubenswrapper[4914]: I0127 14:45:03.862650 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a9df081-a1a4-4c68-85ce-32cd01b7004d-config-volume" (OuterVolumeSpecName: "config-volume") pod "1a9df081-a1a4-4c68-85ce-32cd01b7004d" (UID: "1a9df081-a1a4-4c68-85ce-32cd01b7004d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:45:03 crc kubenswrapper[4914]: I0127 14:45:03.866954 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9df081-a1a4-4c68-85ce-32cd01b7004d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1a9df081-a1a4-4c68-85ce-32cd01b7004d" (UID: "1a9df081-a1a4-4c68-85ce-32cd01b7004d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:45:03 crc kubenswrapper[4914]: I0127 14:45:03.869336 4914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9df081-a1a4-4c68-85ce-32cd01b7004d-kube-api-access-z4mx4" (OuterVolumeSpecName: "kube-api-access-z4mx4") pod "1a9df081-a1a4-4c68-85ce-32cd01b7004d" (UID: "1a9df081-a1a4-4c68-85ce-32cd01b7004d"). InnerVolumeSpecName "kube-api-access-z4mx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:45:03 crc kubenswrapper[4914]: I0127 14:45:03.963542 4914 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4mx4\" (UniqueName: \"kubernetes.io/projected/1a9df081-a1a4-4c68-85ce-32cd01b7004d-kube-api-access-z4mx4\") on node \"crc\" DevicePath \"\""
Jan 27 14:45:03 crc kubenswrapper[4914]: I0127 14:45:03.963580 4914 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a9df081-a1a4-4c68-85ce-32cd01b7004d-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 14:45:03 crc kubenswrapper[4914]: I0127 14:45:03.963612 4914 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a9df081-a1a4-4c68-85ce-32cd01b7004d-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 14:45:04 crc kubenswrapper[4914]: I0127 14:45:04.370322 4914 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v" event={"ID":"1a9df081-a1a4-4c68-85ce-32cd01b7004d","Type":"ContainerDied","Data":"9f0d572c2dbc402c4e3b75e46cb411985b224f645b28dac611fe284277317938"}
Jan 27 14:45:04 crc kubenswrapper[4914]: I0127 14:45:04.370368 4914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f0d572c2dbc402c4e3b75e46cb411985b224f645b28dac611fe284277317938"
Jan 27 14:45:04 crc kubenswrapper[4914]: I0127 14:45:04.370365 4914 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-9925v"
Jan 27 14:45:04 crc kubenswrapper[4914]: I0127 14:45:04.483197 4914 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt"]
Jan 27 14:45:04 crc kubenswrapper[4914]: I0127 14:45:04.495342 4914 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-8s5xt"]
Jan 27 14:45:06 crc kubenswrapper[4914]: I0127 14:45:06.315262 4914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="245bc0a3-6510-45cc-8040-0b1c2435436d" path="/var/lib/kubelet/pods/245bc0a3-6510-45cc-8040-0b1c2435436d/volumes"
Jan 27 14:45:07 crc kubenswrapper[4914]: I0127 14:45:07.691650 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 14:45:07 crc kubenswrapper[4914]: I0127 14:45:07.692187 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 14:45:17 crc kubenswrapper[4914]: I0127 14:45:17.019188 4914 scope.go:117] "RemoveContainer" containerID="c11acf5dd249c0e39eb4f0dfbba5ef8c775206ec3d853160956e949b15e77da3"
Jan 27 14:45:37 crc kubenswrapper[4914]: I0127 14:45:37.691066 4914 patch_prober.go:28] interesting pod/machine-config-daemon-qhdfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 14:45:37 crc kubenswrapper[4914]: I0127 14:45:37.691669 4914 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qhdfz" podUID="bdf2dcff-9caa-45ba-98a8-0a00861bd11a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"